[ { "msg_contents": "The syntax for like_option in CREATE TABLE docs seems to forget to mention\nINCLUDING COMPRESSION option. I think the following fix is necessary.\nPatch attached.\n\n-{ INCLUDING | EXCLUDING } { COMMENTS | CONSTRAINTS | DEFAULTS | GENERATED | IDENTITY | INDEXES | STATISTICS | STORAGE | ALL }\n+{ INCLUDING | EXCLUDING } { COMMENTS | COMPRESSION | CONSTRAINTS | DEFAULTS | GENERATED | IDENTITY | INDEXES | STATISTICS | STORAGE | ALL }\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Wed, 14 Apr 2021 23:46:58 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "INCLUDING COMPRESSION" }, { "msg_contents": "On Wed, Apr 14, 2021 at 11:46:58PM +0900, Fujii Masao wrote:\n> The syntax for like_option in CREATE TABLE docs seems to forget to mention\n> INCLUDING COMPRESSION option. I think the following fix is necessary.\n> Patch attached.\n> \n> -{ INCLUDING | EXCLUDING } { COMMENTS | CONSTRAINTS | DEFAULTS | GENERATED | IDENTITY | INDEXES | STATISTICS | STORAGE | ALL }\n> +{ INCLUDING | EXCLUDING } { COMMENTS | COMPRESSION | CONSTRAINTS | DEFAULTS | GENERATED | IDENTITY | INDEXES | STATISTICS | STORAGE | ALL }\n\nIndeed. May I ask at the same time why gram.y (TableLikeOption) and\nparsenodes.h (CREATE_TABLE_LIKE_COMPRESSION) don't classify this new\noption in alphabetical order with the rest? Ordering them makes\neasier a review of them.\n--\nMichael", "msg_date": "Thu, 15 Apr 2021 11:54:58 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: INCLUDING COMPRESSION" }, { "msg_contents": "On 2021/04/15 11:54, Michael Paquier wrote:\n> On Wed, Apr 14, 2021 at 11:46:58PM +0900, Fujii Masao wrote:\n>> The syntax for like_option in CREATE TABLE docs seems to forget to mention\n>> INCLUDING COMPRESSION option. I think the following fix is necessary.\n>> Patch attached.\n>>\n>> -{ INCLUDING | EXCLUDING } { COMMENTS | CONSTRAINTS | DEFAULTS | GENERATED | IDENTITY | INDEXES | STATISTICS | STORAGE | ALL }\n>> +{ INCLUDING | EXCLUDING } { COMMENTS | COMPRESSION | CONSTRAINTS | DEFAULTS | GENERATED | IDENTITY | INDEXES | STATISTICS | STORAGE | ALL }\n> \n> Indeed.\n\nThanks! Pushed.\n\n> May I ask at the same time why gram.y (TableLikeOption) and\n> parsenodes.h (CREATE_TABLE_LIKE_COMPRESSION) don't classify this new\n> option in alphabetical order with the rest? Ordering them makes\n> easier a review of them.\n\nI'm not sure why. But +1 to make them in alphabetical order.\nPatch attached.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Thu, 15 Apr 2021 23:24:07 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: INCLUDING COMPRESSION" }, { "msg_contents": "On Thu, Apr 15, 2021 at 11:24:07PM +0900, Fujii Masao wrote:\n> I'm not sure why. But +1 to make them in alphabetical order.\n> Patch attached.\n\nLGTM.\n--\nMichael", "msg_date": "Fri, 16 Apr 2021 10:20:17 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: INCLUDING COMPRESSION" }, { "msg_contents": "(moved to -hackers)\n\nhttps://www.postgresql.org/message-id/flat/54d30e66-dbd6-5485-aaf6-a291ed55919d%40oss.nttdata.com\n\nOn Thu, Apr 15, 2021 at 11:24:07PM +0900, Fujii Masao wrote:\n> On 2021/04/15 11:54, Michael Paquier wrote:\n> > May I ask at the same time why gram.y (TableLikeOption) and\n> > parsenodes.h (CREATE_TABLE_LIKE_COMPRESSION) don't classify this new\n> > option in alphabetical order with the rest? Ordering them makes\n> > easier a review of them.\n> \n> I'm not sure why. But +1 to make them in alphabetical order.\n> Patch attached.\n\n+1 to your patch\n\n-- \nJustin", "msg_date": "Thu, 22 Apr 2021 18:51:23 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: INCLUDING COMPRESSION (sort enum fields)" }, { "msg_contents": "\n\nOn 2021/04/16 10:20, Michael Paquier wrote:\n> On Thu, Apr 15, 2021 at 11:24:07PM +0900, Fujii Masao wrote:\n>> I'm not sure why. But +1 to make them in alphabetical order.\n>> Patch attached.\n> \n> LGTM.\n\nPushed. Thanks!\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 23 Apr 2021 19:11:53 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: INCLUDING COMPRESSION" } ]
[ { "msg_contents": "In joinpath.c three times we reference \"extra_lateral_rels\" (with\nunderscores like it's a field), but as far as I can tell that's not a\nfield anywhere in the source code, and looking at the code that\nfollows it seems like it should be referencing \"lateral_relids\" (and\nthe \"extra\" is really \"extra [in relation to relids]\").\n\nAssuming that interpretation is correct, I'd attached a patch to\nchange all three occurrences to \"extra lateral_relids\" to reduce\nconfusion.\n\nThanks,\nJames", "msg_date": "Wed, 14 Apr 2021 11:36:38 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Possible typo/unclear comment in joinpath.c" }, { "msg_contents": "On Wed, Apr 14, 2021 at 11:36:38AM -0400, James Coleman wrote:\n> In joinpath.c three times we reference \"extra_lateral_rels\" (with\n> underscores like it's a field), but as far as I can tell that's not a\n> field anywhere in the source code, and looking at the code that\n> follows it seems like it should be referencing \"lateral_relids\" (and\n> the \"extra\" is really \"extra [in relation to relids]\").\n\nIt looks like a loose end from \n\ncommit edca44b1525b3d591263d032dc4fe500ea771e0e\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Mon Dec 7 18:56:14 2015 -0500\n\n Simplify LATERAL-related calculations within add_paths_to_joinrel().\n\n-- \nJustin", "msg_date": "Wed, 14 Apr 2021 11:42:53 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Possible typo/unclear comment in joinpath.c" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Wed, Apr 14, 2021 at 11:36:38AM -0400, James Coleman wrote:\n>> In joinpath.c three times we reference \"extra_lateral_rels\" (with\n>> underscores like it's a field), but as far as I can tell that's not a\n>> field anywhere in the source code, and looking at the code that\n>> follows it seems like it should be referencing \"lateral_relids\" (and\n>> the \"extra\" is really \"extra [in relation to relids]\").\n\n> It looks like a loose end from \n\n> commit edca44b1525b3d591263d032dc4fe500ea771e0e\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> Date: Mon Dec 7 18:56:14 2015 -0500\n\n> Simplify LATERAL-related calculations within add_paths_to_joinrel().\n\nYeah :-(. I'm usually pretty careful about grepping for comment\nreferences as well as code references to a field when I do something\nlike that, but obviously I missed that step that time.\n\nWill fix, thanks James!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 14 Apr 2021 13:27:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Possible typo/unclear comment in joinpath.c" }, { "msg_contents": "I wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n>> It looks like a loose end from \n>> commit edca44b1525b3d591263d032dc4fe500ea771e0e\n\n> Yeah :-(. I'm usually pretty careful about grepping for comment\n> references as well as code references to a field when I do something\n> like that, but obviously I missed that step that time.\n\nNo, I take that back. There were no references to extra_lateral_rels\nafter that commit; these comments were added by 45be99f8c, about\nsix weeks later. The latter was a pretty large patch and had\npresumably been under development for quite some time, so the comments\nwere probably accurate when written but didn't get updated.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 14 Apr 2021 14:32:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Possible typo/unclear comment in joinpath.c" }, { "msg_contents": "On Wed, Apr 14, 2021 at 2:32 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> No, I take that back. There were no references to extra_lateral_rels\n> after that commit; these comments were added by 45be99f8c, about\n> six weeks later. The latter was a pretty large patch and had\n> presumably been under development for quite some time, so the comments\n> were probably accurate when written but didn't get updated.\n\nWoops.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 14 Apr 2021 15:46:11 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Possible typo/unclear comment in joinpath.c" }, { "msg_contents": "On Wed, Apr 14, 2021 at 1:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > On Wed, Apr 14, 2021 at 11:36:38AM -0400, James Coleman wrote:\n> >> In joinpath.c three times we reference \"extra_lateral_rels\" (with\n> >> underscores like it's a field), but as far as I can tell that's not a\n> >> field anywhere in the source code, and looking at the code that\n> >> follows it seems like it should be referencing \"lateral_relids\" (and\n> >> the \"extra\" is really \"extra [in relation to relids]\").\n>\n> > It looks like a loose end from\n>\n> > commit edca44b1525b3d591263d032dc4fe500ea771e0e\n> > Author: Tom Lane <tgl@sss.pgh.pa.us>\n> > Date: Mon Dec 7 18:56:14 2015 -0500\n>\n> > Simplify LATERAL-related calculations within add_paths_to_joinrel().\n>\n> Yeah :-(. I'm usually pretty careful about grepping for comment\n> references as well as code references to a field when I do something\n> like that, but obviously I missed that step that time.\n>\n> Will fix, thanks James!\n>\n> regards, tom lane\n\nThanks for fixing, Tom!\n\nJames\n\n\n", "msg_date": "Wed, 14 Apr 2021 16:36:22 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Possible typo/unclear comment in joinpath.c" } ]
[ { "msg_contents": "Hi folks,\n\nEnclosed is a patch that expands the unit output for\npg_size_pretty(numeric) going up to Yottabytes; I reworked the existing\nnumeric output code to account for the larger number of units we're using\nrather than just adding nesting levels.\n\nThere are also a few other places that could benefit from expanded units,\nbut this is a standalone starting point.\n\nBest,\n\nDavid", "msg_date": "Wed, 14 Apr 2021 11:13:47 -0500", "msg_from": "David Christensen <david.christensen@crunchydata.com>", "msg_from_op": true, "msg_subject": "[PATCH] expand the units that pg_size_pretty supports on output" }, { "msg_contents": "On Wed, Apr 14, 2021 at 11:13:47AM -0500, David Christensen wrote:\n> Enclosed is a patch that expands the unit output for\n> pg_size_pretty(numeric) going up to Yottabytes; I reworked the existing\n> numeric output code to account for the larger number of units we're using\n> rather than just adding nesting levels.\n> \n> There are also a few other places that could benefit from expanded units,\n> but this is a standalone starting point.\n\nPlease don't forget to add this patch to the next commit fest of July\nif you want to get some reviews:\nhttps://commitfest.postgresql.org/33/\n\nNote that the development of Postgres 14 is over, and that there was a\nfeature freeze last week, but this can be considered for 15.\n--\nMichael", "msg_date": "Thu, 15 Apr 2021 16:48:51 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] expand the units that pg_size_pretty supports on output" }, { "msg_contents": "A second patch to teach the same units to pg_size_bytes().\n\nBest,\n\nDavid\n\nOn Wed, Apr 14, 2021 at 11:13 AM David Christensen <\ndavid.christensen@crunchydata.com> wrote:\n\n> Hi folks,\n>\n> Enclosed is a patch that expands the unit output for\n> pg_size_pretty(numeric) going up to Yottabytes; I reworked the existing\n> numeric output code to account for the larger number of units we're using\n> rather than just adding nesting levels.\n>\n> There are also a few other places that could benefit from expanded units,\n> but this is a standalone starting point.\n>\n> Best,\n>\n> David\n>", "msg_date": "Wed, 28 Apr 2021 10:44:11 -0500", "msg_from": "David Christensen <david.christensen@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] expand the units that pg_size_pretty supports on output" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\nHi David,\r\n\r\nI was reviewing this patch and the compilation failed with following error on CentOS 7.\r\n\r\ndbsize.c: In function ‘pg_size_bytes’:\r\ndbsize.c:808:3: warning: ISO C90 forbids mixed declarations and code [-Wdeclaration-after-statement]\r\n const int unit_count = 9; /* sizeof units table */\r\n ^\r\ndbsize.c:809:3: error: variable length array ‘units’ is used [-Werror=vla]\r\n const char *units[unit_count] = {\r\n ^\r\n\r\nI believe \"unit_count\" ought to be a #define here.\r\n\r\nRegards,\r\nAsif\n\nThe new status of this patch is: Waiting on Author\n", "msg_date": "Sun, 30 May 2021 12:38:36 +0000", "msg_from": "Asif Rehman <asifr.rehman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] expand the units that pg_size_pretty supports on output" }, { "msg_contents": "New versions attached to address the initial CF feedback and rebase on HEAD\nas of now.\n\n0001-Expand-the-units-that-pg_size_pretty-numeric-knows-a.patch\n\n- expands the units that pg_size_pretty() can handle up to YB.\n\n0002-Expand-the-supported-units-in-pg_size_bytes-to-cover.patch\n\n- expands the units that pg_size_bytes() can handle, up to YB.", "msg_date": "Thu, 3 Jun 2021 14:17:53 -0500", "msg_from": "David Christensen <david.christensen@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] expand the units that pg_size_pretty supports on output" }, { "msg_contents": ">From: David Christensen <david.christensen@crunchydata.com> \r\n>Sent: Friday, June 4, 2021 4:18 AM\r\n>To: PostgreSQL-development <pgsql-hackers@postgresql.org>\r\n>Subject: Re: [PATCH] expand the units that pg_size_pretty supports on output\r\n>\r\n>New versions attached to address the initial CF feedback and rebase on HEAD as of now.\r\n>\r\n>0001-Expand-the-units-that-pg_size_pretty-numeric-knows-a.patch \r\n>\r\n>- expands the units that pg_size_pretty() can handle up to YB.\r\n>\r\n>0002-Expand-the-supported-units-in-pg_size_bytes-to-cover.patch\r\n>\r\n>- expands the units that pg_size_bytes() can handle, up to YB.\r\n>\r\nI don't see the need to extend the unit to YB.\r\nWhat use case do you have in mind?\r\n\r\nRegards,\r\nShinya Kato\r\n", "msg_date": "Mon, 14 Jun 2021 04:53:58 +0000", "msg_from": "<Shinya11.Kato@nttdata.com>", "msg_from_op": false, "msg_subject": "RE: [PATCH] expand the units that pg_size_pretty supports on output" }, { "msg_contents": "> I don't see the need to extend the unit to YB.\n> What use case do you have in mind?\n\nPractical or no, I saw no reason not to support all defined units. I assume we’ll get to a need sooner or later. :)\n\nDavid\n\n\n\n", "msg_date": "Mon, 14 Jun 2021 15:11:37 -0500", "msg_from": "David Christensen <david.christensen@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] expand the units that pg_size_pretty supports on output" }, { "msg_contents": ">> I don't see the need to extend the unit to YB.\r\n>> What use case do you have in mind?\r\n>\r\n>Practical or no, I saw no reason not to support all defined units. I assume we’ll\r\n>get to a need sooner or later. :)\r\n\r\nThank you for your reply!\r\nHmmm, I didn't think YB was necessary, but what do others think?\r\n\r\nBest regards,\r\nShinya Kato\r\n", "msg_date": "Tue, 15 Jun 2021 09:24:05 +0000", "msg_from": "<Shinya11.Kato@nttdata.com>", "msg_from_op": false, "msg_subject": "RE: [PATCH] expand the units that pg_size_pretty supports on output" }, { "msg_contents": "On Tue, 15 Jun 2021 at 21:24, <Shinya11.Kato@nttdata.com> wrote:\n> Hmmm, I didn't think YB was necessary, but what do others think?\n\nFor me personally, without consulting Wikipedia, I know that Petabyte\ncomes after Terabyte and then I'm pretty sure it's Exabyte. After\nthat, I'd need to check.\n\nAssuming I'm not the only person who can't tell exactly how many bytes\nare in a Yottabyte, would it actually be a readability improvement if\nwe started showing these units to people?\n\nI'd say there might be some argument to implement as far as PB one\nday, maybe not that far out into the future, especially if we got\nsomething like built-in clustering. But I just don't think there's any\nneed to go all out and take it all the way to YB. There's an above\nzero chance we'll break something of someones by doing this, so I\nthink any changes here should be driven off an actual requirement.\n\nI really think this change is more likely to upset someone than please someone.\n\nJust my thoughts.\n\nDavid\n\n\n", "msg_date": "Wed, 16 Jun 2021 01:26:34 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] expand the units that pg_size_pretty supports on output" }, { "msg_contents": "On Tue, 15 Jun 2021 at 05:24, <Shinya11.Kato@nttdata.com> wrote:\n\n> >> I don't see the need to extend the unit to YB.\n> >> What use case do you have in mind?\n> >\n> >Practical or no, I saw no reason not to support all defined units. I\n> assume we’ll\n> >get to a need sooner or later.
:)\n>\n> Thank you for your reply!\n> Hmmm, I didn't think YB was necessary, but what do others think?\n>\n\nIf I’m reading the code correctly, the difference between supporting YB and\nnot supporting it is whether there is a line for it in the list of prefixes\nand their multiples. As such, I don’t see why we’re even discussing whether\nor not to include all the standard prefixes. A YB is still an absurd amount\nof storage, but that’s not the point; just put all the standard prefixes\nand be done with it. If actual code changes were required in the new code\nas they are in the old it might be worth discussing.\n\nOne question: why is there no “k” in the list of prefixes?\n\nAlso: why not have only the prefixes in the array, and use a single fixed\noutput format \"%s %sB\" all the time?\n\nIt feels like it should be possible to calculate the appropriate idx to use\n(while adjusting the number to print as is done now) and then just have one\npsprintf call for all cases.\n\nA more significant question is YB vs. YiB. I know there is a long tradition\nwithin computer-related fields of saying that k = 1024, M = 1024^2, etc.,\nbut we’re not special enough to override the more general principles of SI\n(Système International) which provide that k = 1000, M = 1000^2 and so on\nuniversally and provide the alternate prefixes ki, Mi, etc. which use 1024\nas the multiple.\n\nSo I would suggest either display 2000000 as 2MB or as 1.907MiB.", "msg_date": "Tue, 15 Jun 2021 09:30:51 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] expand the units that pg_size_pretty supports on output" }, { "msg_contents": "On Tue, Jun 15, 2021 at 8:31 AM Isaac Morland <isaac.morland@gmail.com>\nwrote:\n\n> On Tue, 15 Jun 2021 at 05:24, <Shinya11.Kato@nttdata.com> wrote:\n>\n>> >> I don't see the need to extend the unit to YB.\n>> >> What use case do you have in mind?\n>> >\n>> >Practical or no, I saw no reason not to support all defined units. I\n>> assume we’ll\n>> >get to a need sooner or later.
:)\n>>\n>> Thank you for your reply!\n>> Hmmm, I didn't think YB was necessary, but what do others think?\n>>\n>\n> If I’m reading the code correctly, the difference between supporting YB\n> and not supporting it is whether there is a line for it in the list of\n> prefixes and their multiples. As such, I don’t see why we’re even\n> discussing whether or not to include all the standard prefixes. A YB is\n> still an absurd amount of storage, but that’s not the point; just put all\n> the standard prefixes and be done with it. If actual code changes were\n> required in the new code as they are in the old it might be worth\n> discussing.\n>\n\nAgreed, this is why I went this way. One and done.\n\n\n> One question: why is there no “k” in the list of prefixes?\n>\n\nkB has a special-case code block before you get to this point. I didn't\nlook into the reasons, but assume there are some.\n\n\n> Also: why not have only the prefixes in the array, and use a single fixed\n> output format \"%s %sB\" all the time?\n>\n> It feels like it should be possible to calculate the appropriate idx to\n> use (while adjusting the number to print as is done now) and then just have\n> one psprintf call for all cases.\n>\n\nSure, if that seems more readable/understandable.\n\n\n> A more significant question is YB vs. YiB. I know there is a long\n> tradition within computer-related fields of saying that k = 1024, M =\n> 1024^2, etc., but we’re not special enough to override the more general\n> principles of SI (Système International) which provide that k = 1000, M =\n> 1000^2 and so on universally and provide the alternate prefixes ki, Mi,\n> etc. which use 1024 as the multiple.\n>\n> So I would suggest either display 2000000 as 2MB or as 1.907MiB.\n>\n\nHeh, I was just expanding the existing logic; if others want to have this\nparticular battle go ahead and I'll adjust the code/prefixes, but obviously\nthe logic will need to change if we want to support true MB instead of MiB\nas MB.\n\nAlso, this will presumably be a breaking change for anyone using the\nexisting units MB == 1024 * 1024, as we've had for something like 20\nyears. Changing these units to the *iB will be trivial with this patch,\nbut not looking forward to garnering the consensus to change this part.\n\nDavid", "msg_date": "Tue, 15 Jun 2021 09:51:59 -0500", "msg_from": "David Christensen <david.christensen@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] expand the units that pg_size_pretty supports on output" }, { "msg_contents": "On Tue, Jun 15, 2021 at 8:26 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Tue, 15 Jun 2021 at 21:24, <Shinya11.Kato@nttdata.com> wrote:\n> > Hmmm, I didn't think YB was necessary, but what do others think?\n>\n> For me personally, without consulting Wikipedia, I know that Petabyte\n> comes after Terabyte and then I'm pretty sure it's Exabyte.
After\n> that, I'd need to check.\n>\n> Assuming I'm not the only person who can't tell exactly how many bytes\n> are in a Yottabyte, would it actually be a readability improvement if\n> we started showing these units to people?\n>\n\nI hadn't really thought about that TBH; to me it seemed like an\nimprovement, but I do see that others might not, and adding confusion is\ndefinitely not helpful. That said, it seems like having the code\nstructured in a way that we can expand via adding an element to a table\ninstead of the existing way it's written with nested if blocks is still a\nuseful refactor, whatever we decide the cutoff units should be.\n\n\n> I'd say there might be some argument to implement as far as PB one\n> day, maybe not that far out into the future, especially if we got\n> something like built-in clustering. But I just don't think there's any\n> need to go all out and take it all the way to YB. There's an above\n> zero chance we'll break something of someones by doing this, so I\n> think any changes here should be driven off an actual requirement.\n>\n\nI got motivated to do this due to some (granted synthetic) work/workloads,\nwhere I was seeing 6+digit TB numbers and thought it was ugly. Looked at\nthe code and thought the refactor was the way to go, and just stuck all of\nthe known units in.\n\n\n> I really think this change is more likely to upset someone than please\n> someone.\n>\n\nI'd be interested to see reactions from people; to me, it seems a +1, but\nseems like -1, 0, +1 all valid opinions here; I'd expect more 0's and +1s,\nbut I'm probably biased since I wrote this. :-)", "msg_date": "Tue, 15 Jun 2021 09:58:06 -0500", "msg_from": "David Christensen <david.christensen@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] expand the units that pg_size_pretty supports on output" }, { "msg_contents": "On Wed, 16 Jun 2021 at 02:58, David Christensen\n<david.christensen@crunchydata.com> wrote:\n> That said, it seems like having the code structured in a way that we can expand via adding an element to a table instead of the existing way it's written with nested if blocks is still a useful refactor, whatever we decide the cutoff units should be.\n\nI had not really looked at the patch, but if there's a cleanup portion\nto the same patch as you're adding the YB too, then maybe it's worth\nseparating those out into another patch so that the two can be\nconsidered independently.\n\nDavid\n\n\n", "msg_date": "Wed, 16 Jun 2021 03:33:05 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] expand the units that pg_size_pretty supports on output" }, { "msg_contents": ">I had not really looked at the patch, but if there's a cleanup portion to the same\r\n>patch as you're adding the YB too, then maybe it's worth separating those out\r\n>into another patch so that the two can be considered independently.\r\n\r\nI agree with this opinion. It seems to me that we should think about units and refactoring separately.\r\nSorry for the confusion.\r\n\r\nBest regards,\r\nShinya Kato\r\n", "msg_date": "Wed, 16 Jun 2021 02:17:41 +0000", "msg_from": "<Shinya11.Kato@nttdata.com>", "msg_from_op": false, "msg_subject": "RE: [PATCH] expand the units that pg_size_pretty supports on output" }, { "msg_contents": "\n>> I had not really looked at the patch, but if there's a cleanup portion to the same\n>> patch as you're adding the YB too, then maybe it's worth separating those out\n>> into another patch so that the two can be considered independently.\n> \n> I agree with this opinion.
It seems to me that we should think about units and refactoring separately.\n> Sorry for the confusion.\n\nSure thing, I think that makes sense. Refactor with existing units and debate the number of additional units to include. I do think Petabytes and Exabytes are at least within the realm of ones we should include; less tied to ZB and YB; just included for completeness. \n\n", "msg_date": "Tue, 15 Jun 2021 21:59:47 -0500", "msg_from": "David Christensen <david.christensen@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] expand the units that pg_size_pretty supports on output" }, { "msg_contents": "Shinya11.Kato@nttdata.com writes:\n\n>>I had not really looked at the patch, but if there's a cleanup portion to the same\n>>patch as you're adding the YB too, then maybe it's worth separating those out\n>>into another patch so that the two can be considered independently.\n>\n> I agree with this opinion. It seems to me that we should think about units and refactoring separately.\n> Sorry for the confusion.\n>\n> Best regards,\n> Shinya Kato\n\nHi folks,\n\nHad some time to rework this patch from the two that had previously been\nhere into two separate parts:\n\n1) A basic refactor of the existing code to easily handle expanding the\nunits we use into a table-based format. This also includes changing the\nreturn value of `pg_size_bytes()` from an int64 into a numeric, and\nminor test adjustments to reflect this.\n\n2) Expanding the units that both pg_size_bytes() and pg_size_pretty()\nrecognize up through Yottabytes. This includes documentation and test\nupdates to reflect the changes made here.
How many additional units we\nadd here is up for discussion (inevitably), but my opinion remains that\nthere is no harm in supporting all units available.\n\n\nBest,\n\nDavid", "msg_date": "Tue, 29 Jun 2021 12:11:12 -0500", "msg_from": "David Christensen <david.christensen@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] expand the units that pg_size_pretty supports on output" }, { "msg_contents": "On Wed, 30 Jun 2021 at 05:11, David Christensen\n<david.christensen@crunchydata.com> wrote:\n> 1) A basic refactor of the existing code to easily handle expanding the\n> units we use into a table-based format. This also includes changing the\n> return value of `pg_size_bytes()` from an int64 into a numeric, and\n> minor test adjustments to reflect this.\n\nThis is not quite what I had imagined when you said about adding a\ntable to make it easier to add new units in the future. I expected a\nsingle table that handles all units, not just the ones above kB and\nnot one for each function.\n\nThere are actually two pg_size_pretty functions, one for BIGINT and\none for NUMERIC. I see you only changed the NUMERIC version. I'd\nexpect them both to have the same treatment and use the same table so\nthere's consistency between the two functions.\n\nThe attached is more like what I had in mind. There's a very small net\nreduction in lines of code with this and it also helps keep\npg_size_pretty() and pg_size_pretty_numeric() in sync.\n\nI don't really like the fact that I had to add the doHalfRound field\nto get the same rounding behaviour as the original functions. I'm\nwondering if it would just be too clever just to track how many bits\nwe've shifted right by in pg_size_pretty* and compare that to the\nvalue of multishift for the current unit and do appropriate rounding\nto get us to the value of multishift. In theory, we could just keep\ncalling the half_rounded macro until we make it to the multishift\nvalue. 
My current thoughts are that it's unlikely that anyone would\ntwiddle with the size_pretty_units array in such a way for the extra\ncode to be worth it. Maybe someone else feels differently.\n\nDavid", "msg_date": "Mon, 5 Jul 2021 20:00:39 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] expand the units that pg_size_pretty supports on output" }, { "msg_contents": "On Mon, 5 Jul 2021 at 20:00, David Rowley <dgrowleyml@gmail.com> wrote:\n> I don't really like the fact that I had to add the doHalfRound field\n> to get the same rounding behaviour as the original functions. I'm\n> wondering if it would just be too clever just to track how many bits\n> we've shifted right by in pg_size_pretty* and compare that to the\n> value of multishift for the current unit and do appropriate rounding\n> to get us to the value of multishift. In theory, we could just keep\n> calling the half_rounded macro until we make it to the multishift\n> value. My current thoughts are that it's unlikely that anyone would\n> twiddle with the size_pretty_units array in such a way for the extra\n> code to be worth it. Maybe someone else feels differently.\n\nI made another pass over this and ended up removing the doHalfRound\nfield in favour of just doing rounding based on the previous\nbitshifts.\n\nI did a few other tidy ups and I think it's a useful patch as it\nreduces the amount of code a bit and makes it dead simple to add new\nunits in the future. Most importantly it'll help keep pg_size_pretty,\npg_size_pretty_numeric and pg_size_bytes all in sync in regards to\nwhat units they support.\n\nDoes anyone want to have a look over this? 
If not, I plan to push it\nin the next day or so.\n\n(I'm not sure why pgindent removed the space between != and NULL, but\nit did, so I left it.)\n\nDavid", "msg_date": "Tue, 6 Jul 2021 21:20:10 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] expand the units that pg_size_pretty supports on output" }, { "msg_contents": "On Tue, 6 Jul 2021 at 10:20, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> I made another pass over this and ended up removing the doHalfRound\n> field in favour of just doing rounding based on the previous\n> bitshifts.\n>\n\nWhen I first read this:\n\n+ /* half-round until we get down to unitBits */\n+ while (rightshifts++ < unit->unitBits)\n+ size = half_rounded(size);\n\nit looked to me like it would be invoking half_rounded() multiple\ntimes, which raised alarm bells because that would risk rounding the\nwrong way. Eventually I realised that by the time it reaches that,\nrightshifts will always equal unit->unitBits or unit->unitBits - 1, so\nit'll never do more than one half-round, which is important.\n\nSo perhaps using doHalfRound would be clearer, but it could just be a\nlocal variable tracking whether or not it's the first time through the\nloop.\n\nRegards,\nDean\n\n\n", "msg_date": "Tue, 6 Jul 2021 12:39:13 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] expand the units that pg_size_pretty supports on output" }, { "msg_contents": "On Tue, 6 Jul 2021 at 23:39, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> When I first read this:\n>\n> + /* half-round until we get down to unitBits */\n> + while (rightshifts++ < unit->unitBits)\n> + size = half_rounded(size);\n>\n> it looked to me like it would be invoking half_rounded() multiple\n> times, which raised alarm bells because that would risk rounding the\n> wrong way. 
Eventually I realised that by the time it reaches that,\n> rightshifts will always equal unit->unitBits or unit->unitBits - 1, so\n> it'll never do more than one half-round, which is important.\n\nIt's true that based on how the units table is set up now, it'll only\never do it once for all but the first loop.\n\nI wrote the attached .c file just to try to see if it ever goes wrong\nand I didn't manage to find any inputs where it did. I always seem to\nget the half rounded value either the same as the shifted value or 1\nhigher towards positive infinity\n\n$ ./half_rounded -102 3\n1. half_round(-102) == -51 :: -102 >> 1 = -51\n2. half_round(-51) == -25 :: -51 >> 1 = -26\n3. half_round(-25) == -12 :: -26 >> 1 = -13\n\n$ ./half_rounded 6432 3\n1. half_round(6432) == 3216 :: 6432 >> 1 = 3216\n2. half_round(3216) == 1608 :: 3216 >> 1 = 1608\n3. half_round(1608) == 804 :: 1608 >> 1 = 804\n\nCan you give an example where calling half_rounded too many times will\ngive the wrong value? Keeping in mind we call half_rounded the number\nof times that the passed in value would need to be left-shifted by to\nget the equivalent truncated value.\n\nDavid", "msg_date": "Wed, 7 Jul 2021 00:14:47 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] expand the units that pg_size_pretty supports on output" }, { "msg_contents": "On Tue, 6 Jul 2021 at 13:15, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> Can you give an example where calling half_rounded too many times will\n> give the wrong value? Keeping in mind we call half_rounded the number\n> of times that the passed in value would need to be left-shifted by to\n> get the equivalent truncated value.\n>\n\n./half_rounded 10241 10\n1. half_round(10241) == 5121 :: 10241 >> 1 = 5120\n2. half_round(5121) == 2561 :: 5120 >> 1 = 2560\n3. half_round(2561) == 1281 :: 2560 >> 1 = 1280\n4. half_round(1281) == 641 :: 1280 >> 1 = 640\n5. half_round(641) == 321 :: 640 >> 1 = 320\n6. 
half_round(321) == 161 :: 320 >> 1 = 160\n7. half_round(161) == 81 :: 160 >> 1 = 80\n8. half_round(81) == 41 :: 80 >> 1 = 40\n9. half_round(41) == 21 :: 40 >> 1 = 20\n10. half_round(21) == 11 :: 20 >> 1 = 10\n\nThe correct result should be 10 (it would be very odd to claim that\n10241 bytes should be displayed as 11kb), but the half-rounding keeps\nrounding up at each stage.\n\nThat's a general property of rounding -- you need to be very careful\nwhen rounding more than once, since otherwise errors will propagate.\nC.f. 4083f445c0, which removed a double-round in numeric sqrt().\n\nTo be clear, I'm not saying that the current code half-rounds more\nthan once, just that it reads as if it does.\n\nRegards,\nDean\n\n\n", "msg_date": "Tue, 6 Jul 2021 13:50:58 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] expand the units that pg_size_pretty supports on output" }, { "msg_contents": "\nDavid Rowley writes:\n\n> On Mon, 5 Jul 2021 at 20:00, David Rowley <dgrowleyml@gmail.com> wrote:\n>> I don't really like the fact that I had to add the doHalfRound field\n>> to get the same rounding behaviour as the original functions. I'm\n>> wondering if it would just be too clever just to track how many bits\n>> we've shifted right by in pg_size_pretty* and compare that to the\n>> value of multishift for the current unit and do appropriate rounding\n>> to get us to the value of multishift. In theory, we could just keep\n>> calling the half_rounded macro until we make it to the multishift\n>> value. My current thoughts are that it's unlikely that anyone would\n>> twiddle with the size_pretty_units array in such a way for the extra\n>> code to be worth it. 
Maybe someone else feels differently.\n>\n> I made another pass over this and ended up removing the doHalfRound\n> field in favour of just doing rounding based on the previous\n> bitshifts.\n>\n> I did a few other tidy ups and I think it's a useful patch as it\n> reduces the amount of code a bit and makes it dead simple to add new\n> units in the future. Most importantly it'll help keep pg_size_pretty,\n> pg_size_pretty_numeric and pg_size_bytes all in sync in regards to\n> what units they support.\n>\n> Does anyone want to have a look over this? If not, I plan to push it\n> in the next day or so.\n>\n> (I'm not sure why pgindent removed the space between != and NULL, but\n> it did, so I left it.)\n>\n> David\n\nI like the approach you took here; much cleaner to have one table for all of the individual\ncodepaths. Testing worked as expected; if we do decide to expand the units table there will be a\nfew additional changes (most significantly, the return value of `pg_size_bytes()` will need to switch\nto `numeric`).\n\nThanks,\n\nDavid\n\n\n", "msg_date": "Tue, 06 Jul 2021 09:46:27 -0500", "msg_from": "David Christensen <david.christensen@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] expand the units that pg_size_pretty supports on output" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> Does anyone want to have a look over this? If not, I plan to push it\n> in the next day or so.\n\nMinor nit: use \"const char *text\" in the struct declaration, so\nthat all of the static data can be placed in fixed storage.\n\n> (I'm not sure why pgindent removed the space between != and NULL, but\n> it did, so I left it.)\n\nIt did that because \"text\" is a typedef name, so it's a bit confused\nabout whether the statement is really a declaration. 
Personally I'd\nhave used \"name\" or something like that for that field, anyway.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 06 Jul 2021 10:54:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] expand the units that pg_size_pretty supports on output" }, { "msg_contents": "On Wed, 7 Jul 2021 at 00:51, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> 10. half_round(21) == 11 :: 20 >> 1 = 10\n>\n> The correct result should be 10 (it would be very odd to claim that\n> 10241 bytes should be displayed as 11kb), but the half-rounding keeps\n> rounding up at each stage.\n>\n> That's a general property of rounding -- you need to be very careful\n> when rounding more than once, since otherwise errors will propagate.\n> C.f. 4083f445c0, which removed a double-round in numeric sqrt().\n\nThanks. I've adjusted the patch to re-add the round bool flag and get\nrid of the rightShift field. I'm now calculating how many bits to\nshift right by based on the difference between the unitbits of the\ncurrent and next unit then taking 1 bit less if the next unit does\nhalf rounding and the current one does not, or adding an extra bit on\nin the opposite case.\n\nI'll post another patch shortly.\n\nDavid\n\n\n", "msg_date": "Wed, 7 Jul 2021 14:44:51 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] expand the units that pg_size_pretty supports on output" }, { "msg_contents": "On Wed, 7 Jul 2021 at 02:54, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Minor nit: use \"const char *text\" in the struct declaration, so\n> that all of the static data can be placed in fixed storage.\n\nThanks for pointing that out.\n\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > (I'm not sure why pgindent removed the space between != and NULL, but\n> > it did, so I left it.)\n>\n> It did that because \"text\" is a typedef name, so it's a bit confused\n> about whether the statement is 
really a declaration. Personally I'd\n> have used \"name\" or something like that for that field, anyway.\n\nI should have thought of that. Thanks for highlighting it. I've\nrenamed the field.\n\nUpdated patch attached.\n\nDavid", "msg_date": "Wed, 7 Jul 2021 14:47:22 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] expand the units that pg_size_pretty supports on output" }, { "msg_contents": "On Wed, 7 Jul 2021 at 02:46, David Christensen\n<david.christensen@crunchydata.com> wrote:\n> if we do decide to expand the units table there will be a\n> few additional changes (most significantly, the return value of `pg_size_bytes()` will need to switch\n> to `numeric`).\n\nI wonder if it's worth changing pg_size_bytes() to return NUMERIC\nregardless of if we add any additional units or not.\n\nWould you like to create 2 patches, one to change the return type and\nanother to add the new units, both based on top of the v2 patch I sent\nearlier?\n\nDavid\n\n\n", "msg_date": "Wed, 7 Jul 2021 16:37:43 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] expand the units that pg_size_pretty supports on output" }, { "msg_contents": "On Wed, 7 Jul 2021 at 03:47, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> Updated patch attached.\n>\n\nHmm, this looked easy, but...\n\nIt occurred to me that there ought to be regression tests for the edge\ncases where it steps from one unit to the next. 
So, in the style of\nthe existing regression tests, I tried the following:\n\nSELECT size, pg_size_pretty(size), pg_size_pretty(-1 * size) FROM\n (VALUES (10239::bigint), (10240::bigint),\n (10485247::bigint), (10485248::bigint),\n (10736893951::bigint), (10736893952::bigint),\n (10994579406847::bigint), (10994579406848::bigint),\n (11258449312612351::bigint), (11258449312612352::bigint)) x(size);\n\n size | pg_size_pretty | pg_size_pretty\n-------------------+----------------+----------------\n 10239 | 10239 bytes | -10239 bytes\n 10240 | 10 kB | -10 kB\n 10485247 | 10239 kB | -10 MB\n 10485248 | 10 MB | -10 MB\n 10736893951 | 10239 MB | -10 GB\n 10736893952 | 10 GB | -10 GB\n 10994579406847 | 10239 GB | -10 TB\n 10994579406848 | 10 TB | -10 TB\n 11258449312612351 | 10239 TB | -10239 TB\n 11258449312612352 | 10240 TB | -10239 TB\n(10 rows)\n\nSELECT size, pg_size_pretty(size), pg_size_pretty(-1 * size) FROM\n (VALUES (10239::numeric), (10240::numeric),\n (10485247::numeric), (10485248::numeric),\n (10736893951::numeric), (10736893952::numeric),\n (10994579406847::numeric), (10994579406848::numeric),\n (11258449312612351::numeric), (11258449312612352::numeric)) x(size);\n\n size | pg_size_pretty | pg_size_pretty\n-------------------+----------------+----------------\n 10239 | 10239 bytes | -10239 bytes\n 10240 | 10 kB | -10 kB\n 10485247 | 10239 kB | -10239 kB\n 10485248 | 10 MB | -10 MB\n 10736893951 | 10239 MB | -10239 MB\n 10736893952 | 10 GB | -10 GB\n 10994579406847 | 10239 GB | -10239 GB\n 10994579406848 | 10 TB | -10 TB\n 11258449312612351 | 10239 TB | -10239 TB\n 11258449312612352 | 10240 TB | -10240 TB\n(10 rows)\n\nUnder the assumption that what we're trying to achieve here is\nschoolbook rounding (ties away from zero), the numeric results are\ncorrect and the bigint results are wrong.\n\nThe reason is that bit shifting isn't the same as division for\nnegative numbers, since bit shifting rounds towards negative infinity\nwhereas division rounds towards 
zero (truncates), which is what I\nthink we really need.\n\nRegards,\nDean\n\n\n", "msg_date": "Wed, 7 Jul 2021 14:32:08 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] expand the units that pg_size_pretty supports on output" }, { "msg_contents": "David Rowley writes:\n\n> On Wed, 7 Jul 2021 at 02:46, David Christensen\n> <david.christensen@crunchydata.com> wrote:\n>> if we do decide to expand the units table there will be a\n>> few additional changes (most significantly, the return value of `pg_size_bytes()` will need to switch\n>> to `numeric`).\n>\n> I wonder if it's worth changing pg_size_bytes() to return NUMERIC\n> regardless of if we add any additional units or not.\n>\n> Would you like to create 2 patches, one to change the return type and\n> another to add the new units, both based on top of the v2 patch I sent\n> earlier?\n>\n> David\n\nEnclosed is the patch to change the return type to numeric, as well as one for expanding units to\nadd PB and EB.\n\nIf we decide to expand further, the current implementation will need to change, as\nZB and YB have 70 and 80 bits needing to be shifted accordingly, so int64 isn't enough to hold\nit. (I fixed this particular issue in the original version of this patch, so there is at least a\nblueprint of how to fix.)\n\nI figured that PB and EB are probably good enough additions at this point, so we can debate whether\nto add the others.\n\nBest,\n\nDavid", "msg_date": "Wed, 07 Jul 2021 12:44:55 -0500", "msg_from": "David Christensen <david.christensen@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] expand the units that pg_size_pretty supports on output" }, { "msg_contents": "David Christensen <david.christensen@crunchydata.com> writes:\n> Enclosed is the patch to change the return type to numeric, as well as one for expanding units to\n> add PB and EB.\n\nCan we really get away with changing the return type? 
That would\nby no stretch of the imagination be free; one could expect breakage\nof a few user views, for example.\n\nIndependently of that, I'm pretty much -1 on going further than PB.\nEven if the additional abbreviations you mention are actually recognized\nstandards, I think not that many people are familiar with them, and such\ninput is way more likely to be a typo than intended data.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 07 Jul 2021 15:31:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] expand the units that pg_size_pretty supports on output" }, { "msg_contents": "\nTom Lane writes:\n\n> David Christensen <david.christensen@crunchydata.com> writes:\n>> Enclosed is the patch to change the return type to numeric, as well as one for expanding units to\n>> add PB and EB.\n>\n> Can we really get away with changing the return type? That would\n> by no stretch of the imagination be free; one could expect breakage\n> of a few user views, for example.\n\nHmm, that's a good point, and we can't really make the return type polymorphic (being as there isn't\na source type of the given return value).\n\n> Independently of that, I'm pretty much -1 on going further than PB.\n> Even if the additional abbreviations you mention are actually recognized\n> standards, I think not that many people are familiar with them, and such\n> input is way more likely to be a typo than intended data.\n\nIf we do go ahead and restrict the expansion to just PB, the return value of pg_size_bytes() would\nstill support up to 8192 PB before running into range limitations. 
I assume it's not worth creating\na pg_size_bytes_numeric() with the full range of supported units, but that is presumably an option\nas well.\n\nBest,\n\nDavid\n\n\n", "msg_date": "Wed, 07 Jul 2021 14:56:46 -0500", "msg_from": "David Christensen <david.christensen@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] expand the units that pg_size_pretty supports on output" }, { "msg_contents": "On Thu, 8 Jul 2021 at 01:32, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> Hmm, this looked easy, but...\n>\n> It occurred to me that there ought to be regression tests for the edge\n> cases where it steps from one unit to the next. So, in the style of\n> the existing regression tests, I tried the following:\n>\n> SELECT size, pg_size_pretty(size), pg_size_pretty(-1 * size) FROM\n> (VALUES (10239::bigint), (10240::bigint),\n> (10485247::bigint), (10485248::bigint),\n> (10736893951::bigint), (10736893952::bigint),\n> (10994579406847::bigint), (10994579406848::bigint),\n> (11258449312612351::bigint), (11258449312612352::bigint)) x(size);\n\n\n> 11258449312612352 | 10240 TB | -10239 TB\n\nHmm, yeah, I noticed that pg_size_pretty(bigint) vs\npg_size_pretty(numeric) didn't always match when I was testing this\npatch, but I was more focused on having my results matching the\nunpatched version than I was with making sure bigint and numeric\nmatched.\n\nI imagine this must date back to 8a1fab36ab. Do you feel like it's\nsomething this patch should fix? 
I was mostly hoping to keep this\npatch about rewriting the code to both make it easier to add new units\nand also to make it easier to keep all 3 functions in sync.\n\nIt feels like if we're going to fix this negative rounding thing then\nwe should maybe do it and backpatch a fix then rebase this work on top\nof that.\n\nWhat are your thoughts?\n\nDavid\n\n\n", "msg_date": "Thu, 8 Jul 2021 13:31:23 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] expand the units that pg_size_pretty supports on output" }, { "msg_contents": "On Thu, 8 Jul 2021 at 13:31, David Rowley <dgrowleyml@gmail.com> wrote:\n> It feels like if we're going to fix this negative rounding thing then\n> we should maybe do it and backpatch a fix then rebase this work on top\n> of that.\n\nHere's a patch which I believe makes pg_size_pretty() and\npg_size_pretty_numeric() match in regards to negative values.\n\nMaybe this plus your regression test would be ok to back-patch?\n\nDavid", "msg_date": "Thu, 8 Jul 2021 16:29:58 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] expand the units that pg_size_pretty supports on output" }, { "msg_contents": "On Thu, 8 Jul 2021 at 05:30, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Thu, 8 Jul 2021 at 13:31, David Rowley <dgrowleyml@gmail.com> wrote:\n> > It feels like if we're going to fix this negative rounding thing then\n> > we should maybe do it and backpatch a fix then rebase this work on top\n> > of that.\n\nYes, that was my thinking too.\n\n> Here's a patch which I believe makes pg_size_pretty() and\n> pg_size_pretty_numeric() match in regards to negative values.\n\nLGTM, except I think it's worth also making the numeric code not refer\nto bit shifting either.\n\n> Maybe this plus your regression test would be ok to back-patch?\n\n+1\n\nHere's an update with matching updates to the numeric code, plus the\nregression 
tests.\n\nRegards,\nDean", "msg_date": "Thu, 8 Jul 2021 09:23:03 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] expand the units that pg_size_pretty supports on output" }, { "msg_contents": "On Thu, 8 Jul 2021 at 07:31, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Christensen <david.christensen@crunchydata.com> writes:\n> > Enclosed is the patch to change the return type to numeric, as well as one for expanding units to\n> > add PB and EB.\n>\n> Can we really get away with changing the return type? That would\n> by no stretch of the imagination be free; one could expect breakage\n> of a few user views, for example.\n\nThat's a good point. We should probably leave it alone then. I had\nhad it in mind that it might be ok since we did this for extract() in\n14. At least we have date_part() as a backup there. I'm fine to leave\nthe return value of pg_size_bytes as-is.\n\n> Independently of that, I'm pretty much -1 on going further than PB.\n> Even if the additional abbreviations you mention are actually recognized\n> standards, I think not that many people are familiar with them, and such\n> input is way more likely to be a typo than intended data.\n\nI'm fine with that too. In [1] I mentioned my concerns with adding\nall the defined units up to Yottabyte. David reduced that down to just\nexabytes, but I think if we're keeping pg_size_bytes returning bigint\nthen drawing the line at PB seems ok to me. 
Anything more than\npg_size_bytes('8 EB') would overflow.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvp9ym+RSQNGoSRPjH+j6TJ1tFBhfT+JoLFf_RbZq1EszQ@mail.gmail.com\n\n\n", "msg_date": "Fri, 9 Jul 2021 00:49:37 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] expand the units that pg_size_pretty supports on output" }, { "msg_contents": "On Thu, 8 Jul 2021 at 20:23, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> > On Thu, 8 Jul 2021 at 13:31, David Rowley <dgrowleyml@gmail.com> wrote:\n> > Here's a patch which I believe makes pg_size_pretty() and\n> > pg_size_pretty_numeric() match in regards to negative values.\n>\n> LGTM, except I think it's worth also making the numeric code not refer\n> to bit shifting either.\n>\n> > Maybe this plus your regression test would be ok to back-patch?\n>\n> +1\n>\n> Here's an update with matching updates to the numeric code, plus the\n> regression tests.\n\nLooks good.\n\nI gave it a bit of exercise by running pgbench and calling this procedure:\n\nCREATE OR REPLACE PROCEDURE public.test_size_pretty2()\n LANGUAGE plpgsql\nAS $procedure$\ndeclare b bigint;\nbegin\n FOR i IN 1..1000 LOOP\n b := 0 - (random() * 9223372036854775807)::bigint;\n if pg_size_pretty(b) <> pg_size_pretty(b::numeric) then\n raise notice '%. % != %', b,\npg_size_pretty(b), pg_size_pretty(b::numeric);\n end if;\n END LOOP;\nEND;\n$procedure$\n\nIt ran 8526956 times, so with the loop that's 8.5 billion random\nnumbers. No variations between the two functions.
I got the same\nafter removing the 0 - to test positive numbers.\n\nIf you like, I can push this in my morning, or if you'd rather do it\nyourself, please go ahead.\n\nDavid\n\n\n", "msg_date": "Fri, 9 Jul 2021 01:38:18 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] expand the units that pg_size_pretty supports on output" }, { "msg_contents": "On Thu, 8 Jul 2021 at 14:38, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> I gave it a bit of exercise by running pgbench and calling this procedure:\n>\n> It ran 8526956 times, so with the loop that's 8.5 billion random\n> numbers. No variations between the two functions. I got the same\n> after removing the 0 - to test positive numbers.\n\nWow, that's a lot of testing! I just tried a few hand-picked edge cases.\n\n> If you like, I can push this in my morning, or if you'd rather do it\n> yourself, please go ahead.\n\nNo, I didn't get as much time as I thought I would today, so please go ahead.\n\nRegards,\nDean\n\n\n", "msg_date": "Thu, 8 Jul 2021 18:38:22 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] expand the units that pg_size_pretty supports on output" }, { "msg_contents": "On Thu, 8 Jul 2021 at 05:44, David Christensen\n<david.christensen@crunchydata.com> wrote:\n> Enclosed is the patch to change the return type to numeric, as well as one for expanding units to\n> add PB and EB.\n\nI ended up not changing the return type of pg_size_bytes().\n\n> I figured that PB and EB are probably good enough additions at this point, so we can debate whether\n> to add the others.\n\nPer Tom's concern both with changing the return type of\npg_size_bytes() and his and my concern about going too far adding more\nunits, I've adjusted your patch to only add petabytes and pushed it.\nThe maximum range of BIGINT is only 8 exabytes, so the BIGINT version\nwould never show in exabytes anyway. 
It would still be measuring in\npetabytes at the 64-bit range limit.\n\nAfter a bit of searching, I found reports that the estimated entire\nstored digital data on Earth as of 2020 to be 59 zettabytes, or about\n0.06 yottabytes. I feel like we've gone far enough by adding\npetabytes today. Maybe that's worth revisiting in a couple of storage\ngenerations. After we're done there, we can start working on the LSN\nwraparound code.\n\nDavid\n\n\n", "msg_date": "Fri, 9 Jul 2021 19:15:08 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] expand the units that pg_size_pretty supports on output" } ]
[ { "msg_contents": "Hello hackers,\n\nI'm trying to understand what is happening in the following bug report:\nhttps://bugzilla.redhat.com/show_bug.cgi?id=1935301\n\nThe upgrade process makes it a bit more difficult, but it seems to boil \ndown to this problem -- even when pg_ctl gets clear guidance where to \nfind datadir using -D option on the command-line, it forgets this \nguidance once finding data_directory option in the postgresql.conf.\n\nIs this the expected behavior actually? Or is the behavior in this case \n(i.e. when the same option is specified on the cmd-line and also in the \ndatadir, with different values) defined at all?\n\n(couldn't find it in the doc and even google does not return me anything \nuseful)\n\nThanks for any tips,\nHonza\n\n\n\n", "msg_date": "Wed, 14 Apr 2021 18:21:58 +0200", "msg_from": "Honza Horak <hhorak@redhat.com>", "msg_from_op": true, "msg_subject": "Options given both on cmd-line and in the config with different\n values" }, { "msg_contents": "Honza Horak <hhorak@redhat.com> writes:\n> I'm trying to understand what is happening in the following bug report:\n> https://bugzilla.redhat.com/show_bug.cgi?id=1935301\n\n> The upgrade process makes it a bit more difficult, but it seems to boil \n> down to this problem -- even when pg_ctl gets clear guidance where to \n> find datadir using -D option on the command-line, it forgets this \n> guidance once finding data_directory option in the postgresql.conf.\n\n> Is this the expected behavior actually?\n\nThe rule actually is that -D on the command line says where to find\nthe configuration file. While -D is then also the default for where\nto find the data directory, the config file can override that by\ngiving data_directory explicitly.\n\nThis is intended to support situations where the config file is kept\noutside the data directory for management reasons. 
If you are not\nactively doing that, I'd recommend *not* setting data_directory\nexplicitly in the file.\n\nWhile I've not studied the bug report carefully, it sounds like the\nupdate process you're using involves copying the old config file\nacross verbatim. You'd at minimum need to filter out data_directory\nand related settings to make that safe.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 14 Apr 2021 13:55:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Options given both on cmd-line and in the config with different\n values" }, { "msg_contents": "On 4/14/21 7:55 PM, Tom Lane wrote:\n> Honza Horak <hhorak@redhat.com> writes:\n>> I'm trying to understand what is happening in the following bug report:\n>> https://bugzilla.redhat.com/show_bug.cgi?id=1935301\n> \n>> The upgrade process makes it a bit more difficult, but it seems to boil\n>> down to this problem -- even when pg_ctl gets clear guidance where to\n>> find datadir using -D option on the command-line, it forgets this\n>> guidance once finding data_directory option in the postgresql.conf.\n> \n>> Is this the expected behavior actually?\n> \n> The rule actually is that -D on the command line says where to find\n> the configuration file. While -D is then also the default for where\n> to find the data directory, the config file can override that by\n> giving data_directory explicitly.\n> \n> This is intended to support situations where the config file is kept\n> outside the data directory for management reasons. If you are not\n> actively doing that, I'd recommend *not* setting data_directory\n> explicitly in the file.\n> \n> While I've not studied the bug report carefully, it sounds like the\n> update process you're using involves copying the old config file\n> across verbatim. You'd at minimum need to filter out data_directory\n> and related settings to make that safe.\n\nThanks for explaining, it makes perfect sense. 
You're right that there \nis some dbdata directory moving involved, so in that case removing \ndata_directory option from postgresql.conf makes sense.\n\nThanks,\nHonza\n\n\n\n", "msg_date": "Thu, 15 Apr 2021 18:29:07 +0200", "msg_from": "Honza Horak <hhorak@redhat.com>", "msg_from_op": true, "msg_subject": "Re: Options given both on cmd-line and in the config with different\n values" } ]
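The precedence rule Tom describes in the thread above (-D names the directory holding postgresql.conf and is only the *default* data directory, which an explicit data_directory setting in the file overrides) can be sketched as a toy model. This is plain illustrative Python, not PostgreSQL source; the function name and dict-based config representation are invented for the example:

```python
def resolve_data_directory(dash_d_dir, config):
    # -D points at the directory containing postgresql.conf; it also
    # serves as the default data directory, but an explicit
    # data_directory entry in the config file overrides it.
    return config.get("data_directory", dash_d_dir)

# A stale data_directory copied verbatim during an upgrade silently
# overrides the -D value given on the command line:
resolve_data_directory("/var/lib/pgsql/13/data",
                       {"data_directory": "/var/lib/pgsql/12/data"})
# -> "/var/lib/pgsql/12/data"

# With data_directory left unset in the file, -D wins:
resolve_data_directory("/var/lib/pgsql/13/data", {})
# -> "/var/lib/pgsql/13/data"
```

This mirrors the fix agreed in the thread: drop data_directory from a copied postgresql.conf unless the config file is deliberately kept outside the data directory.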
[ { "msg_contents": "I noticed some broken-looking logic in recordMultipleDependencies\nconcerning how it records collation versions. It was a bit harder\nthan I expected to demonstrate the bugs, but I eventually succeeded\nwith\n\nu8=# create function foo(varchar) returns bool language sql return false;\nCREATE FUNCTION\nu8=# create collation mycoll from \"en_US\";\nCREATE COLLATION\nu8=# CREATE DOMAIN d4 AS character varying(3) COLLATE \"aa_DJ\"\n CONSTRAINT yes_or_no_check CHECK (value = 'YES' collate mycoll or foo(value));\nCREATE DOMAIN\nu8=# select objid, pg_describe_object(classid,objid,objsubid) as obj, pg_describe_object(refclassid,refobjid,refobjsubid) as ref, deptype, refobjversion from pg_depend where objid = 'd4'::regtype;\n objid | obj | ref | deptype | refobjversion \n-------+---------+-------------------+---------+---------------\n 37421 | type d4 | schema public | n | \n 37421 | type d4 | collation \"aa_DJ\" | n | \n(2 rows)\n\nu8=# select objid, pg_describe_object(classid,objid,objsubid) as obj, pg_describe_object(refclassid,refobjid,refobjsubid) as ref, deptype, refobjversion from pg_depend where refobjid = 'd4'::regtype;\n objid | obj | ref | deptype | refobjversion \n-------+----------------------------+---------+---------+---------------\n 37420 | type d4[] | type d4 | i | \n 37422 | constraint yes_or_no_check | type d4 | a | \n(2 rows)\n\nu8=# select objid, pg_describe_object(classid,objid,objsubid) as obj, pg_describe_object(refclassid,refobjid,refobjsubid) as ref, deptype, refobjversion from pg_depend where objid = 37422;\n objid | obj | ref | deptype | refobjversion \n-------+----------------------------+---------------------------------+---------+---------------\n 37422 | constraint yes_or_no_check | type d4 | a | \n 37422 | constraint yes_or_no_check | collation mycoll | n | 2.28\n 37422 | constraint yes_or_no_check | function foo(character varying) | n | 2.28\n 37422 | constraint yes_or_no_check | collation \"default\" | n | \n(4 
rows)\n\n(This is in a glibc-based build, with C as the database's default\ncollation.)\n\nOne question here is whether it's correct that the domain's dependency\non collation \"aa_DJ\" is unversioned. Maybe that's intentional, but it\nseems worth asking.\n\nAnyway, there are two pretty obvious bugs in the dependencies for the\ndomain's CHECK constraint: the version for collation mycoll leaks\ninto the entry for function foo, and an entirely useless (because\nunversioned) dependency is recorded on the default collation.\n\n... well, it's almost entirely useless. If we fix things to not do that\n(as per patch 0001 below), the results of the create_index regression\ntest become unstable, because there's two queries that inquire into the\ndependencies of indexes, and their results change depending on whether\nthe default collation has a version or not. I'd be inclined to just\ntake out the portions of that test that depend on that question, but\nmaybe somebody will complain that there's a loss of useful coverage.\nI don't agree, but maybe I'll be overruled.\n\nIf we do feel we need to stay bug-compatible with that behavior, then\nthe alternate 0002 patch just fixes the version-leakage-across-entries\nproblem, while still removing the unnecessary assumption that C, POSIX,\nand DEFAULT are the only pinned collations.\n\n(To be clear: 0002 passes check-world as-is, while 0001 is not\ncommittable without some regression-test fiddling.)\n\nThoughts?\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 14 Apr 2021 13:18:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Bogus collation version recording in recordMultipleDependencies" }, { "msg_contents": "On Wed, Apr 14, 2021 at 01:18:07PM -0400, Tom Lane wrote:\n> \n> One question here is whether it's correct that the domain's dependency\n> on collation \"aa_DJ\" is unversioned. 
Maybe that's intentional, but it\n> seems worth asking.\n\nThis is intentional I think, we should record collation version only for object\nthat might break if the collation version is updated. So creating an index on\nthat domain would record the collation version.\n\n> Anyway, there are two pretty obvious bugs in the dependencies for the\n> domain's CHECK constraint: the version for collation mycoll leaks\n> into the entry for function foo, and an entirely useless (because\n> unversioned) dependency is recorded on the default collation.\n\nAgreed.\n\n> (To be clear: 0002 passes check-world as-is, while 0001 is not\n> committable without some regression-test fiddling.)\n\nI'm probably missing something obvious but both 0001 and 0002 pass check-world\nfor me, on a glibc box and --with-icu.\n\n> Thoughts?\n\nI think this is an open item, so I added one for now.\n\n\n", "msg_date": "Thu, 15 Apr 2021 18:56:47 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bogus collation version recording in recordMultipleDependencies" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Wed, Apr 14, 2021 at 01:18:07PM -0400, Tom Lane wrote:\n>> (To be clear: 0002 passes check-world as-is, while 0001 is not\n>> committable without some regression-test fiddling.)\n\n> I'm probably missing something obvious but both 0001 and 0002 pass check-world\n> for me, on a glibc box and --with-icu.\n\n0001 fails for me :-(. 
I think that requires default collation to be C.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 15 Apr 2021 10:06:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Bogus collation version recording in recordMultipleDependencies" }, { "msg_contents": "On Thu, Apr 15, 2021 at 10:06:24AM -0400, Tom Lane wrote:\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > On Wed, Apr 14, 2021 at 01:18:07PM -0400, Tom Lane wrote:\n> >> (To be clear: 0002 passes check-world as-is, while 0001 is not\n> >> committable without some regression-test fiddling.)\n> \n> > I'm probably missing something obvious but both 0001 and 0002 pass check-world\n> > for me, on a glibc box and --with-icu.\n> \n> 0001 fails for me :-(. I think that requires default collation to be C.\n\nOh right, adding --no-locale to the regress opts I see that create_index is\nfailing, and that's not the one I was expecting.\n\nWe could change create_index test to create c2 with a C collation, in order to\ntest that we don't track dependency on unversioned locales, and add an extra\ntest in collate.linux.utf8 to check that we do track a dependency on the\ndefault collation as this test isn't run in the --no-locale case. The only\ncase not tested would be default unversioned collation, but I'm not sure where\nto properly test that. Maybe a short leading test in collate.linux.utf8 that\nwould be run on linux in that case (when getdatabaseencoding() != 'UTF8')? It\nwould require an extra alternate file but it wouldn't cause too much\nmaintenance problem as there should be only one test.\n\n\n", "msg_date": "Fri, 16 Apr 2021 10:56:05 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bogus collation version recording in recordMultipleDependencies" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Thu, Apr 15, 2021 at 10:06:24AM -0400, Tom Lane wrote:\n>> 0001 fails for me :-(. 
I think that requires default collation to be C.\n\n> Oh right, adding --no-locale to the regress opts I see that create_index is\n> failing, and that's not the one I was expecting.\n\n> We could change create_index test to create c2 with a C collation, in order to\n> test that we don't track dependency on unversioned locales, and add an extra\n> test in collate.linux.utf8 to check that we do track a dependency on the\n> default collation as this test isn't run in the --no-locale case. The only\n> case not tested would be default unversioned collation, but I'm not sure where\n> to properly test that. Maybe a short leading test in collate.linux.utf8 that\n> would be run on linux in that case (when getdatabaseencoding() != 'UTF8')? It\n> would require an extra alternate file but it wouldn't cause too much\n> maintenance problem as there should be only one test.\n\nSince the proposed patch removes the dependency code's special-case\nhandling of the default collation, I don't feel like we need to jump\nthrough hoops to prove that the default collation is tracked the\nsame as other collations. A regression test with alternative outputs\nis a significant ongoing maintenance burden, and I do not see where\nwe're getting a commensurate improvement in test coverage. Especially\nsince, AFAICS, the two alternative outputs would essentially have to\naccept both the \"it works\" and \"it doesn't work\" outcomes.\n\nSo I propose that we do 0001 below, which is my first patch plus your\nsuggestion about fixing up create_index.sql. 
This passes check-world\nfor me under both C and en_US.utf8 prevailing locales.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 16 Apr 2021 10:03:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Bogus collation version recording in recordMultipleDependencies" }, { "msg_contents": "On Fri, Apr 16, 2021 at 10:03:42AM -0400, Tom Lane wrote:\n> \n> Since the proposed patch removes the dependency code's special-case\n> handling of the default collation, I don't feel like we need to jump\n> through hoops to prove that the default collation is tracked the\n> same as other collations. A regression test with alternative outputs\n> is a significant ongoing maintenance burden, and I do not see where\n> we're getting a commensurate improvement in test coverage. Especially\n> since, AFAICS, the two alternative outputs would essentially have to\n> accept both the \"it works\" and \"it doesn't work\" outcomes.\n\nFine by me, I was mentioning those if we wanted to keep some extra coverage for\nthat by I agree it doesn't add much value.\n\n> So I propose that we do 0001 below, which is my first patch plus your\n> suggestion about fixing up create_index.sql. This passes check-world\n> for me under both C and en_US.utf8 prevailing locales.\n\nThat's what I ended up with too, so LGTM!\n\n\n", "msg_date": "Fri, 16 Apr 2021 23:55:35 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bogus collation version recording in recordMultipleDependencies" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Fri, Apr 16, 2021 at 10:03:42AM -0400, Tom Lane wrote:\n>> So I propose that we do 0001 below, which is my first patch plus your\n>> suggestion about fixing up create_index.sql. This passes check-world\n>> for me under both C and en_US.utf8 prevailing locales.\n\n> That's what I ended up with too, so LGTM!\n\nPushed, thanks for review! 
(and I'll update the open items list in a\nsec)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 16 Apr 2021 12:27:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Bogus collation version recording in recordMultipleDependencies" }, { "msg_contents": "I wrote:\n>> That's what I ended up with too, so LGTM!\n\n> Pushed, thanks for review! (and I'll update the open items list in a\n> sec)\n\n... or maybe not just yet. Andres' buildfarm critters seem to have\na different opinion than my machine about what the output of\ncollate.icu.utf8 ought to be.
I wonder what the prevailing LANG\n> setting is for them, and which ICU version they're using.\n\nandres@andres-pg-buildfarm-valgrind:~/src/pgbuildfarm-client-stock$ grep calliph *.conf\nbuild-farm-copyparse.conf: animal => \"calliphoridae\",\nbuild-farm-copyparse.conf: build_root => '/mnt/resource/andres/bf/calliphoridae',\n\nandres@andres-pg-buildfarm-valgrind:~/src/pgbuildfarm-client-stock$ dpkg -l|grep icu\nii icu-devtools 67.1-6 amd64 Development utilities for International Components for Unicode\nii libicu-dev:amd64 67.1-6 amd64 Development files for International Components for Unicode\nii libicu67:amd64 67.1-6 amd64 International Components for Unicode\n\nandres@andres-pg-buildfarm-valgrind:~/src/pgbuildfarm-client-stock$ locale\nLANG=C.UTF-8\nLANGUAGE=\nLC_CTYPE=\"C.UTF-8\"\nLC_NUMERIC=\"C.UTF-8\"\nLC_TIME=\"C.UTF-8\"\nLC_COLLATE=\"C.UTF-8\"\nLC_MONETARY=\"C.UTF-8\"\nLC_MESSAGES=\"C.UTF-8\"\nLC_PAPER=\"C.UTF-8\"\nLC_NAME=\"C.UTF-8\"\nLC_ADDRESS=\"C.UTF-8\"\nLC_TELEPHONE=\"C.UTF-8\"\nLC_MEASUREMENT=\"C.UTF-8\"\nLC_IDENTIFICATION=\"C.UTF-8\"\nLC_ALL=\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 16 Apr 2021 10:04:16 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Bogus collation version recording in recordMultipleDependencies" }, { "msg_contents": "I wrote:\n> ... or maybe not just yet. Andres' buildfarm critters seem to have\n> a different opinion than my machine about what the output of\n> collate.icu.utf8 ought to be. I wonder what the prevailing LANG\n> setting is for them, and which ICU version they're using.\n\nOh, I bet it's \"C.utf8\", because I can reproduce the failure with that.\nThis crystallizes a nagging feeling I'd had that you were misdescribing\nthe collate.icu.utf8 test as not being run under --no-locale. 
Actually,\nit's only skipped if the encoding isn't UTF8, not the same thing.\nI think we need to remove the default-collation cases from that test too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 16 Apr 2021 13:07:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Bogus collation version recording in recordMultipleDependencies" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-04-16 12:55:28 -0400, Tom Lane wrote:\n>> ... or maybe not just yet. Andres' buildfarm critters seem to have\n>> a different opinion than my machine about what the output of\n>> collate.icu.utf8 ought to be. I wonder what the prevailing LANG\n>> setting is for them, and which ICU version they're using.\n\n> LANG=C.UTF-8\n\nI'd guessed that shortly later, but thanks for confirming.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 16 Apr 2021 13:13:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Bogus collation version recording in recordMultipleDependencies" }, { "msg_contents": "I wrote:\n> Oh, I bet it's \"C.utf8\", because I can reproduce the failure with that.\n> This crystallizes a nagging feeling I'd had that you were misdescribing\n> the collate.icu.utf8 test as not being run under --no-locale. Actually,\n> it's only skipped if the encoding isn't UTF8, not the same thing.\n> I think we need to remove the default-collation cases from that test too.\n\nHmm ... this is more subtle than it seemed.\n\nI tried to figure out where the default-collation dependencies were coming\nfrom, and it's quite non-obvious, at least for some of them. 
Observe:\n\nu8de=# create table t1 (f1 text collate \"fr_FR\");\nCREATE TABLE\nu8de=# create index on t1(f1) where f1 > 'foo';\nCREATE INDEX\nu8de=# SELECT objid::regclass, refobjid::regcollation, refobjversion\nFROM pg_depend d\nLEFT JOIN pg_class c ON c.oid = d.objid\nWHERE refclassid = 'pg_collation'::regclass\nAND coalesce(relkind, 'i') = 'i'\nAND relname LIKE 't1_%';\n objid | refobjid | refobjversion \n-----------+-----------+---------------\n t1_f1_idx | \"fr_FR\" | 2.28\n t1_f1_idx | \"fr_FR\" | 2.28\n t1_f1_idx | \"default\" | 2.28\n(3 rows)\n\n(The \"default\" item doesn't show up if default collation is C,\nwhich is what's causing the buildfarm instability.)\n\nNow, it certainly looks like that index definition ought to only\nhave fr_FR dependencies. I dug into it and discovered that the\nreason we're coming up with a dependency on \"default\" is that\nthe WHERE clause looks like\n\n\t {OPEXPR \n\t :opno 666 \n\t :opfuncid 742 \n\t :opresulttype 16 \n\t :opretset false \n\t :opcollid 0 \n\t :inputcollid 14484 \n\t :args (\n\t {VAR \n\t :varno 1 \n\t :varattno 1 \n\t :vartype 25 \n\t :vartypmod -1 \n\t :varcollid 14484 \n\t :varlevelsup 0 \n\t :varnosyn 1 \n\t :varattnosyn 1 \n\t :location 23\n\t }\n\t {CONST \n\t :consttype 25 \n\t :consttypmod -1 \n\t :constcollid 100 \n\t :constlen -1 \n\t :constbyval false \n\t :constisnull false \n\t :location 28 \n\t :constvalue 7 [ 28 0 0 0 102 111 111 ]\n\t }\n\t )\n\t :location 26\n\t }\n\nSo sure enough, the comparison operator's inputcollid is\nfr_FR, but the 'foo' constant has constcollid = \"default\".\nThat will have exactly zero impact on the semantics of the\nexpression, but dependency.c doesn't realize that and\nreports it as a dependency anyway.\n\nI feel like this is telling us that there's a fundamental\nmisunderstanding in find_expr_references_walker about which\ncollation dependencies to report. 
It's reporting all the\nleaf-node collations, and ignoring the ones that actually\ncount semantically, that is the inputcollid fields of\nfunction and operator nodes.\n\nNot sure what's the best thing to do here. Redesigning\nthis post-feature-freeze doesn't seem terribly appetizing,\nbut on the other hand, this index collation recording\nfeature has put a premium on not overstating the collation\ndependencies of an expression. We don't want to tell users\nthat an index is broken when it isn't really.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 16 Apr 2021 13:45:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Bogus collation version recording in recordMultipleDependencies" }, { "msg_contents": "I wrote:\n> I feel like this is telling us that there's a fundamental\n> misunderstanding in find_expr_references_walker about which\n> collation dependencies to report. It's reporting all the\n> leaf-node collations, and ignoring the ones that actually\n> count semantically, that is the inputcollid fields of\n> function and operator nodes.\n> Not sure what's the best thing to do here. Redesigning\n> this post-feature-freeze doesn't seem terribly appetizing,\n> but on the other hand, this index collation recording\n> feature has put a premium on not overstating the collation\n> dependencies of an expression. We don't want to tell users\n> that an index is broken when it isn't really.\n\nI felt less hesitant to modify find_expr_references_walker's\nbehavior w.r.t. collations after realizing that most of it\nwas not of long standing, but came in with 257836a75.\nSo here's a draft patch that redesigns it as suggested above.\nAlong the way I discovered that GetTypeCollations was quite\nbroken for ranges and arrays, so this fixes that too.\n\nPer the changes in collate.icu.utf8.out, this gets rid of\na lot of imaginary collation dependencies, but it also gets\nrid of some arguably-real ones. 
In particular, calls of\nrecord_eq and its siblings will be considered not to have\nany collation dependencies, although we know that internally\nthose will look up per-column collations of their input types.\nWe could imagine special-casing record_eq etc here, but that\nsure seems like a hack.\n\nI\"m starting to have a bad feeling about 257836a75 overall.\nAs I think I've complained before, I do not like anything about\nwhat it's done to pg_depend; it's forcing that relation to serve\ntwo masters, neither one well. We now see that the same remark\napplies to find_expr_references(), because the semantics of\n\"which collations does this expression's behavior depend on\" aren't\nidentical to \"which collations need to be recorded as direct\ndependencies of this expression\", especially not if you'd prefer\nto minimize either list. (Which is important.) Moreover, for all\nthe complexity it's introducing, it's next door to useless for\nglibc collations --- we might as well tell people \"reindex\neverything when your glibc version changes\", which could be done\nwith a heck of a lot less infrastructure. The situation on Windows\nlooks pretty user-unfriendly as well, per the other thread.\n\nSo I wonder if, rather than continuing to pursue this right now,\nwe shouldn't revert 257836a75 and try again later with a new design\nthat doesn't try to commandeer the existing dependency infrastructure.\nWe might have a better idea about what to do on Windows by the time\nthat's done, too.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 16 Apr 2021 16:39:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Bogus collation version recording in recordMultipleDependencies" }, { "msg_contents": "On Sat, Apr 17, 2021 at 8:39 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Per the changes in collate.icu.utf8.out, this gets rid of\n> a lot of imaginary collation dependencies, but it also gets\n> rid of some arguably-real ones. 
In particular, calls of\n> record_eq and its siblings will be considered not to have\n> any collation dependencies, although we know that internally\n> those will look up per-column collations of their input types.\n> We could imagine special-casing record_eq etc here, but that\n> sure seems like a hack.\n\nThanks for looking into all this. Hmm.\n\n> I\"m starting to have a bad feeling about 257836a75 overall.\n> As I think I've complained before, I do not like anything about\n> what it's done to pg_depend; it's forcing that relation to serve\n> two masters, neither one well. ...\n\nWe did worry about (essentially) this question quite a bit in the\ndiscussion thread, but we figured that you'd otherwise have to create\na parallel infrastructure that would look almost identical (for\nexample [1]).\n\n> ... We now see that the same remark\n> applies to find_expr_references(), because the semantics of\n> \"which collations does this expression's behavior depend on\" aren't\n> identical to \"which collations need to be recorded as direct\n> dependencies of this expression\", especially not if you'd prefer\n> to minimize either list. (Which is important.) ...\n\nBugs in the current analyser code aside, if we had a second catalog\nand a second analyser for this stuff, then you'd still have the union\nof both minimised sets in total, with some extra duplication because\nyou'd have some rows in both places that are currently handled by one\nrow, no?\n\n> ... Moreover, for all\n> the complexity it's introducing, it's next door to useless for\n> glibc collations --- we might as well tell people \"reindex\n> everything when your glibc version changes\", which could be done\n> with a heck of a lot less infrastructure. ...\n\nYou do gain reliable tracking of which indexes remain to be rebuilt,\nand warnings for common hazards like hot standbys with mismatched\nglibc, so I think it's pretty useful. 
As for the poverty of\ninformation from glibc, I don't see why it should hold ICU, Windows,\nFreeBSD users back. In fact I am rather hoping that by shipping this,\nglibc developers will receive encouragement to add the trivial\ninterface we need to do better.\n\n> ... The situation on Windows\n> looks pretty user-unfriendly as well, per the other thread.\n\nThat is unfortunate, it seems like such a stupid problem. Restating\nhere for the sake of the list: initdb just needs to figure out how to\nask for the current environment's locale in BCP 47 format (\"en-US\")\nwhen setting the default for your template databases, not the\ntraditional format (\"English_United States.1252\") that Microsoft\nexplicitly tells us not to store in databases and that doesn't work in\nthe versioning API, but since we're mostly all Unix hackers we don't\nknow how.\n\n> So I wonder if, rather than continuing to pursue this right now,\n> we shouldn't revert 257836a75 and try again later with a new design\n> that doesn't try to commandeer the existing dependency infrastructure.\n> We might have a better idea about what to do on Windows by the time\n> that's done, too.\n\nIt seems to me that there are two things that would be needed to\nsalvage this for PG14: (1) deciding that we're unlikely to come up\nwith a better idea than using pg_depend for this (following the\nargument that it'd only create duplication to have a parallel\ndedicated catalog), (2) fixing any remaining flaws in the dependency\nanalyser code. 
I'll look into the details some more on Monday.\n\n[1] https://www.postgresql.org/message-id/e9e22c5e-c018-f4ea-24c8-5b6d6fdacf30%402ndquadrant.com\n\n\n", "msg_date": "Sat, 17 Apr 2021 10:01:53 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bogus collation version recording in recordMultipleDependencies" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> I'll look into the details some more on Monday.\n\nFair enough.\n\nAlthough there are only a few buildfarm members complaining, I don't\nreally want to leave them red all weekend.
I could either commit the\n>> patch I just presented, or revert ef387bed8 ... got a preference?\n\n> +1 for committing the new patch for now. I will look into to the\n> record problem. More in a couple of days.\n\nOK, done.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 16 Apr 2021 22:24:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Bogus collation version recording in recordMultipleDependencies" }, { "msg_contents": "On Fri, Apr 16, 2021 at 01:07:52PM -0400, Tom Lane wrote:\n> I wrote:\n> > ... or maybe not just yet. Andres' buildfarm critters seem to have\n> > a different opinion than my machine about what the output of\n> > collate.icu.utf8 ought to be. I wonder what the prevailing LANG\n> > setting is for them, and which ICU version they're using.\n> \n> Oh, I bet it's \"C.utf8\", because I can reproduce the failure with that.\n> This crystallizes a nagging feeling I'd had that you were misdescribing\n> the collate.icu.utf8 test as not being run under --no-locale. Actually,\n> it's only skipped if the encoding isn't UTF8, not the same thing.\n> I think we need to remove the default-collation cases from that test too.\n\nIIUC pg_regress --no-locale will call initdb --no-locale which force the locale\nto C, and in that case pg_get_encoding_from_locale() does force SQL_ASCII as\nencoding. 
But yes I clearly didn't think at all that you could set the various\nenv variables to C.utf8 which can then run the collate.icu.utf8 or linux.utf8\n:(\n\n\n", "msg_date": "Sat, 17 Apr 2021 17:23:09 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bogus collation version recording in recordMultipleDependencies" }, { "msg_contents": "On Fri, Apr 16, 2021 at 10:24:21PM -0400, Tom Lane wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > On Sat, Apr 17, 2021 at 10:47 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Although there are only a few buildfarm members complaining, I don't\n> >> really want to leave them red all weekend. I could either commit the\n> >> patch I just presented, or revert ef387bed8 ... got a preference?\n> \n> > +1 for committing the new patch for now. I will look into to the\n> > record problem. More in a couple of days.\n> \n> OK, done.\n\nThanks for the fixes! I'll also look at the problem.\n\n\n", "msg_date": "Sat, 17 Apr 2021 17:24:43 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bogus collation version recording in recordMultipleDependencies" }, { "msg_contents": "On Sat, Apr 17, 2021 at 10:01:53AM +1200, Thomas Munro wrote:\n> On Sat, Apr 17, 2021 at 8:39 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Per the changes in collate.icu.utf8.out, this gets rid of\n> > a lot of imaginary collation dependencies, but it also gets\n> > rid of some arguably-real ones. 
In particular, calls of\n> > record_eq and its siblings will be considered not to have\n> > any collation dependencies, although we know that internally\n> > those will look up per-column collations of their input types.\n> > We could imagine special-casing record_eq etc here, but that\n> > sure seems like a hack.\n> \n> [...]\n> \n> > So I wonder if, rather than continuing to pursue this right now,\n> > we shouldn't revert 257836a75 and try again later with a new design\n> > that doesn't try to commandeer the existing dependency infrastructure.\n> > We might have a better idea about what to do on Windows by the time\n> > that's done, too.\n> \n> It seems to me that there are two things that would be needed to\n> salvage this for PG14: (1) deciding that we're unlikely to come up\n> with a better idea than using pg_depend for this (following the\n> argument that it'd only create duplication to have a parallel\n> dedicated catalog), (2) fixing any remaining flaws in the dependency\n> analyser code. I'll look into the details some more on Monday.\n\nSo IIUC the issue here is that the code could previously record useless\ncollation version dependencies in some cases, which could lead to false\npositive possible corruption messages (and of course additional bloat on\npg_depend). False positive messages can't be avoided anyway, as a collation\nversion update may not corrupt the actually indexed set of data, especially for\nglibc. But with the infrastructure as-is advanced users can look into the new\nversion changes and choose to ignore changes for a specific set of collations,\nwhich is way easier to do with the recorded dependencies.\n\nThe new situation is now that the code can record too few version dependencies\nleading to false negative detection, which is way more problematic.\n\nThis was previously discussed around [1].
Quoting Thomas:\n\n> To state more explicitly what's happening here, we're searching the\n> expression trees for subexpressions that have a collation as part of\n> their static type. We don't know which functions or operators are\n> actually affected by the collation, though. For example, if an\n> expression says \"x IS NOT NULL\" and x happens to be a subexpression of\n> a type with a particular collation, we don't know that this\n> expression's value can't possibly be affected by the collation version\n> changing. So, the system will nag you to rebuild an index just\n> because you mentioned it, even though the index can't be corrupted.\n> To do better than that, I suppose we'd need declarations in the\n> catalog to say which functions/operators are collation sensitive.\n\nWe agreed that having possible false positive dependencies was acceptable for\nthe initial implementation and that we will improve it in later versions, as\notherwise the alternative is to reindex everything without getting any warning,\nwhich clearly isn't better anyway.\n\nFTR we had the same agreement to not handle specific AMs that don't care about\ncollation (like hash or bloom) in [2], even though I provided a patch to handle\nthat case ([3]) which was dropped later on ([4]).\n\nProperly and correctly handling collation version dependency in expressions is\na hard problem and will definitely require additional fields in pg_proc, so we\nclearly can't add that in pg14.
So yes we have to decide whether we want to\nkeep the feature in pg14 with the known limitations (and in that case probably\nrevert f24b15699, possibly improving documentation on the possibility of false\npositive) or revert it entirely.\n\nUnsurprisingly, I think that the feature as-is is already a significant\nimprovement, which can be easily improved, so my vote is to keep it in pg14.\nAnd just to be clear I'm volunteering to work on the expression problem and all\nother related improvements for the next version, whether the current feature is\nreverted or not.\n\n[1]: https://www.postgresql.org/message-id/CA%2BhUKGK8CwBcTcXWL2kUjpHT%2B6t2hEFCzkcZ-Z7xXbz%3DC4NLCQ%40mail.gmail.com\n[2]: https://www.postgresql.org/message-id/13b0c950-80f9-4c10-7e0f-f59feac56a98%402ndquadrant.com\n[3]: https://www.postgresql.org/message-id/20200908144507.GA57691%40nol\n[4]: https://www.postgresql.org/message-id/CA%2BhUKGKHj4aYmmwKZdZjkD%3DCWRmn%3De6UsS7S%2Bu6oLrrp0orgsw%40mail.gmail.com\n\n\n", "msg_date": "Sun, 18 Apr 2021 19:23:33 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bogus collation version recording in recordMultipleDependencies" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Sat, Apr 17, 2021 at 10:01:53AM +1200, Thomas Munro wrote:\n>> It seems to me that there are two things that would be needed to\n>> salvage this for PG14: (1) deciding that we're unlikely to come up\n>> with a better idea than using pg_depend for this (following the\n>> argument that it'd only create duplication to have a parallel\n>> dedicated catalog), (2) fixing any remaining flaws in the dependency\n>> analyser code. 
I'll look into the details some more on Monday.\n\n> So IIUC the issue here is that the code could previously record useless\n> collation version dependencies in somes cases, ...\n> The new situation is now that the code can record too few version dependencies\n> leading to false negative detection, which is way more problematic.\n\nI'm not sure that an error in this direction is all that much more\nproblematic than the other direction. If it's okay to claim that\nindexes need to be rebuilt when they don't really, then we could just\ndrop this entire overcomplicated infrastructure and report that all\nindexes need to be rebuilt after any collation version change.\n\nBut in any case you're oversimplifying tremendously. The previous code is\njust as capable of errors of omission, because it was inquiring into the\nwrong composite types, ie those of leaf expression nodes. The ones we'd\nneed to look at are the immediate inputs of record_eq and siblings. Here\nare a couple of examples where the leaf types are unhelpful:\n\n... where row(a,b,c)::composite_type < row(d,e,f)::composite_type;\n... where function_returning_composite(...) < function_returning_composite(...);\n\nAnd even if we do this, we're not entirely in the clear in an abstract\nsense, because this only covers cases in which an immediate input is\nof a known named composite type. Cases dealing in anonymous RECORD\ntypes simply can't be resolved statically. It might be that that\ncan't occur in the specific situation of CREATE INDEX expressions,\nbut I'm not 100% sure of it. The apparent counterexample of\n\n... 
where row(a,b) < row(a,c)\n\nisn't one because we parse that as RowCompareExpr not an application\nof record_lt.\n\n> We agreed that having possible false positive dependencies was acceptable for\n> the initial implementation and that we will improve it in later versions, as\n> otherwise the alternative is to reindex everything without getting any warning,\n> which clearly isn't better anyway.\n\n[ shrug... ] You have both false positives and false negatives in the\nthing as it stood before f24b15699. I'm not convinced that it's possible\nto completely avoid either issue via static analysis. I'm inclined to\nthink that false negatives around record_eq-like functions are not such a\nproblem for real index definitions, and we'd be better off with fewer\nfalse positives. But it's all judgment calls.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 18 Apr 2021 11:29:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Bogus collation version recording in recordMultipleDependencies" }, { "msg_contents": "Hi,\n\nOn 2021-04-18 11:29:42 -0400, Tom Lane wrote:\n> I'm not sure that an error in this direction is all that much more\n> problematic than the other direction. If it's okay to claim that\n> indexes need to be rebuilt when they don't really, then we could just\n> drop this entire overcomplicated infrastructure and report that all\n> indexes need to be rebuilt after any collation version change.\n\nThat doesn't ring true to me. 
There's a huge difference between needing\nto rebuild all indexes, especially primary key indexes which often are\nover int8 etc, and unnecessarily needing to rebuild indexes doing\ncomparatively rare things.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 19 Apr 2021 10:36:32 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Bogus collation version recording in recordMultipleDependencies" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-04-18 11:29:42 -0400, Tom Lane wrote:\n>> I'm not sure that an error in this direction is all that much more\n>> problematic than the other direction. If it's okay to claim that\n>> indexes need to be rebuilt when they don't really, then we could just\n>> drop this entire overcomplicated infrastructure and report that all\n>> indexes need to be rebuilt after any collation version change.\n\n> That doesn't ring true to me. There's a huge difference between needing\n> to rebuild all indexes, especially primary key indexes which often are\n> over int8 etc, and unnecessarily needing to rebuild indexes doing\n> comparatively rare things.\n\nIt would not be that hard to exclude indexes on int8, or other cases\nthat clearly have zero collation dependencies. And I think I might\nhave some faith in such a solution. Right now I have zero faith\nthat the patch as it stands gives trustworthy answers.\n\nI think that the real fundamental bug is supposing that static analysis\ncan give 100% correct answers. 
Even if it did do so in a given state\nof the database, consider this counterexample:\n\ncreate type myrow as (f1 int, f2 int);\ncreate table mytable (id bigint, r1 myrow, r2 myrow);\ncreate index myindex on mytable(id) where r1 < r2;\nalter type myrow add attribute f3 text;\n\nmyindex is recorded as having no collation dependency, but that is\nnow wrong.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 19 Apr 2021 13:52:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Bogus collation version recording in recordMultipleDependencies" }, { "msg_contents": "On Mon, Apr 19, 2021 at 10:53 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I think that the real fundamental bug is supposing that static analysis\n> can give 100% correct answers. Even if it did do so in a given state\n> of the database, consider this counterexample:\n>\n> create type myrow as (f1 int, f2 int);\n> create table mytable (id bigint, r1 myrow, r2 myrow);\n> create index myindex on mytable(id) where r1 < r2;\n> alter type myrow add attribute f3 text;\n>\n> myindex is recorded as having no collation dependency, but that is\n> now wrong.\n\nIs it really the case that static analysis of the kind that you'd need\nto make this 100% robust is fundamentally impossible? 
I find that\nproposition hard to believe.\n\nI'm not sure that you were making a totally general statement, rather\nthan a statement about the patch/implementation, so perhaps I just\nmissed the point.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 19 Apr 2021 11:04:26 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Bogus collation version recording in recordMultipleDependencies" }, { "msg_contents": "On Sun, Apr 18, 2021 at 4:23 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> So IIUC the issue here is that the code could previously record useless\n> collation version dependencies in somes cases, which could lead to false\n> positive possible corruption messages (and of course additional bloat on\n> pg_depend). False positive messages can't be avoided anyway, as a collation\n> version update may not corrupt the actually indexed set of data, especially for\n> glibc.\n\nThis argument seems completely absurd to me.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 19 Apr 2021 11:13:37 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Bogus collation version recording in recordMultipleDependencies" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Mon, Apr 19, 2021 at 10:53 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I think that the real fundamental bug is supposing that static analysis\n>> can give 100% correct answers.\n\n> Is it really the case that static analysis of the kind that you'd need\n> to make this 100% robust is fundamentally impossible? 
I find that\n> proposition hard to believe.\n\nI didn't mean to imply that it's necessarily theoretically impossible,\nbut given our lack of visibility into what a function or operator\nwill do, plus the way that the collation feature was bolted on\nwith minimal system-level redesign, it's sure pretty darn hard.\nCode like record_eq is doing a lot at runtime that we can't really\nsee from static analysis.\n\nAnyway, given the ALTER TYPE ADD ATTRIBUTE counterexample, I'm\ndefinitely starting to lean towards \"revert and try again in v15\".\nI feel we'd be best off to consider functions/operators that\noperate on container types to be \"maybe\"s rather than certainly\nsafe or certainly not safe. I think that such things appear\nsufficiently rarely in index specifications that it's not worth it\nto try to do an exact analysis of them, even if we were sure we\ncould get that 100% right. But that doesn't seem to be an idea that\ncan trivially be added to the current design.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 19 Apr 2021 14:49:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Bogus collation version recording in recordMultipleDependencies" }, { "msg_contents": "On Mon, Apr 19, 2021 at 11:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I didn't mean to imply that it's necessarily theoretically impossible,\n> but given our lack of visibility into what a function or operator\n> will do, plus the way that the collation feature was bolted on\n> with minimal system-level redesign, it's sure pretty darn hard.\n> Code like record_eq is doing a lot at runtime that we can't really\n> see from static analysis.\n\nIt's worth pointing out that code like record_eq is not (or at least\nshould not be) fundamentally unpredictable and unruly. The fact that\nrecord_eq does typecache lookups and whatnot seems to me to be an\nimplementation detail. 
What record_eq is entitled to assume about\ncollations could be formalized by some general high-level\nspecification. It ought to be possible to do this, just as it ought to\nbe possible for us to statically determine if a composite type is safe\nto use with B-Tree deduplication.\n\nWhether or not it's worth the trouble is another matter, but it might\nbe if a single effort solved a bunch of related problems, not just the\ncollation dependency problem.\n\n> Anyway, given the ALTER TYPE ADD ATTRIBUTE counterexample, I'm\n> definitely starting to lean towards \"revert and try again in v15\".\n\nThe counterexample concerns me because it seems to indicate a lack of\nsophistication in how dependencies are managed with corner cases -- I\ndon't think that it's okay to leave the behavior unspecified in a\nstable release. But I also think that we should consider if code like\nrecord_eq is in fact the real problem (or just the lack of any general\nspecification that constrains code like it in useful ways, perhaps).\nThis probably won't affect whether or not the patch gets reverted now,\nbut it still matters.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 19 Apr 2021 12:38:27 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Bogus collation version recording in recordMultipleDependencies" }, { "msg_contents": "On Tue, Apr 20, 2021 at 5:53 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I think that the real fundamental bug is supposing that static analysis\n> can give 100% correct answers. ...\n\nWell, the goal was to perform analysis to the extent possible\nstatically since that would cover the vast majority of cases and is\npractically all you can do. Clearly there is always going to be a\ncategory of invisible dependencies inside procedural code in general\n(halting problem). 
We did think about the idea of using new\ndeclarations about functions/operators to know which ones actually\ncare about collation, rather than assuming that they all do (bugs\naside), as an optimisation, and then that mechanism could in theory\nalso be used to say that functions that don't appear to depend on\ncollations actually do internally, but that all seemed like vast\noverkill, so we left it for possible later improvements. The question\non my mind is whether reverting the feature and trying again for 15\ncould produce anything fundamentally better at a design level, or\nwould just fix problems in the analyser code that we could fix right\nnow. For example, if you think there actually is a potential better\nplan than using pg_depend for this, that'd definitely be good to know\nabout.\n\n> ... Even if it did do so in a given state\n> of the database, consider this counterexample:\n>\n> create type myrow as (f1 int, f2 int);\n> create table mytable (id bigint, r1 myrow, r2 myrow);\n> create index myindex on mytable(id) where r1 < r2;\n> alter type myrow add attribute f3 text;\n>\n> myindex is recorded as having no collation dependency, but that is\n> now wrong.\n\nHrmph. Yeah. We didn't consider types that change later like this,\nand handling those correctly does seem to warrant some more thought\nand work than we perhaps have time for. My first thought is that we'd\nneed to teach it to trigger reanalysis.\n\n\n", "msg_date": "Tue, 20 Apr 2021 07:42:42 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bogus collation version recording in recordMultipleDependencies" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> ...
The question\n> on my mind is whether reverting the feature and trying again for 15\n> could produce anything fundamentally better at a design level, or\n> would just fix problems in the analyser code that we could fix right\n> now.\n\nWell, as I said, I think what we ought to do is treat any record-accepting\nfunctions/operators as \"don't know, better assume it's collation\ndependent\". What's not clear to me is how that concept could be\nshoehorned into the existing design.\n\n> For example, if you think there actually is a potential better\n> plan than using pg_depend for this, that'd definitely be good to know\n> about.\n\nI really dislike using pg_depend, for a couple of reasons:\n\n* You've broken the invariant that dependencies on pinned objects\nare never recorded. Now, some of them exist, for reasons having\nnothing to do with the primary goals of pg_depend. If that's not\na sign of bad relational design, I don't know what is. I didn't\nlook at the code, but I wonder if you didn't have to lobotomize\nsome error checks in dependency.c because of that. (Perhaps\nsome sort of special-case representation for the default\ncollation would help here?)\n\n* pg_depend used to always be all-not-null. Now, most rows in it\nwill need a nulls bitmap, adding 8 bytes per row (on maxalign=8\nhardware) to what had been fairly narrow rows. By my arithmetic\nthat's 13.3% bloat in what is already one of our largest\ncatalogs. That's quite unpleasant. (It would actually be\ncheaper to store an empty-string refobjversion for non-collation\nentries; a single-byte string would fit into the pad space\nafter deptype, adding nothing to the row width.)\n\n> Hrmph. Yeah. We didn't consider types that change later like this,\n> and handling those correctly does seem to warrant some more thought\n> and work than we perhaps have time for. 
My first thought is that we'd\n> need to teach it to trigger reanalysis.\n\nThat seems like a nonstarter, even before you think about race\nconditions.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 19 Apr 2021 16:21:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Bogus collation version recording in recordMultipleDependencies" }, { "msg_contents": "On Tue, Apr 20, 2021 at 8:21 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:>\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > For example, if you think there actually is a potential better\n> > plan than using pg_depend for this, that'd definitely be good to know\n> > about.\n>\n> I really dislike using pg_depend, for a couple of reasons:\n>\n> * You've broken the invariant that dependencies on pinned objects\n> are never recorded. Now, some of them exist, for reasons having\n> nothing to do with the primary goals of pg_depend. If that's not\n> a sign of bad relational design, I don't know what is. I didn't\n> look at the code, but I wonder if you didn't have to lobotomize\n> some error checks in dependency.c because of that. (Perhaps\n> some sort of special-case representation for the default\n> collation would help here?)\n\nHmm, OK, thanks, that's something to go back and think about.\n\n> * pg_depend used to always be all-not-null. Now, most rows in it\n> will need a nulls bitmap, adding 8 bytes per row (on maxalign=8\n> hardware) to what had been fairly narrow rows. By my arithmetic\n> that's 13.3% bloat in what is already one of our largest\n> catalogs. That's quite unpleasant. (It would actually be\n> cheaper to store an empty-string refobjversion for non-collation\n> entries; a single-byte string would fit into the pad space\n> after deptype, adding nothing to the row width.)\n\nThat seems like a good idea.\n\n> > Hrmph. Yeah. 
We didn't consider types that change later like this,\n> > and handling those correctly does seem to warrant some more thought\n> > and work than we perhaps have time for. My first thought is that we'd\n> > need to teach it to trigger reanalysis.\n>\n> That seems like a nonstarter, even before you think about race\n> conditions.\n\nYeah, that runs directly into non-trivial locking problems. I felt\nlike some of the other complaints could conceivably be addressed in\ntime, including dumb stuff like Windows default locale string format\nand hopefully some expression analysis problems, but not this. I'll\nhold off reverting for a few more days to see if anyone has any other\nthoughts on that, because there doesn't seem to be any advantage in\nbeing too hasty about it.\n\n\n", "msg_date": "Tue, 20 Apr 2021 12:05:27 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bogus collation version recording in recordMultipleDependencies" }, { "msg_contents": "On Mon, Apr 19, 2021 at 11:13:37AM -0700, Peter Geoghegan wrote:\n> On Sun, Apr 18, 2021 at 4:23 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > So IIUC the issue here is that the code could previously record useless\n> > collation version dependencies in somes cases, which could lead to false\n> > positive possible corruption messages (and of course additional bloat on\n> > pg_depend). False positive messages can't be avoided anyway, as a collation\n> > version update may not corrupt the actually indexed set of data, especially for\n> > glibc.\n> \n> This argument seems completely absurd to me.\n\nI'm not sure why? For glibc at least, I don't see how we could not end up\nraising false positive as you have a single glibc version for all its\ncollations. 
If a user has say en_US and fr_FR, or any quite stable collation,\nmost of the glibc upgrades (except 2.28 of course) won't corrupt your indexes.\n\n\n", "msg_date": "Tue, 20 Apr 2021 09:46:07 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bogus collation version recording in recordMultipleDependencies" }, { "msg_contents": "On Tue, Apr 20, 2021 at 12:05:27PM +1200, Thomas Munro wrote:\n> \n> Yeah, that runs directly into non-trivial locking problems. I felt\n> like some of the other complaints could conceivably be addressed in\n> time, including dumb stuff like Windows default locale string format\n> and hopefully some expression analysis problems, but not this. I'll\n> hold off reverting for a few more days to see if anyone has any other\n> thoughts on that, because there doesn't seem to be any advantage in\n> being too hasty about it.\n\nI also feel that the ALTER TYPE example Tom showed earlier isn't something\ntrivial to fix and cannot be done in pg14 :(\n\n\n", "msg_date": "Tue, 20 Apr 2021 09:49:08 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bogus collation version recording in recordMultipleDependencies" }, { "msg_contents": "On Mon, Apr 19, 2021 at 6:45 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > This argument seems completely absurd to me.\n>\n> I'm not sure why? For glibc at least, I don't see how we could not end up\n> raising false positive as you have a single glibc version for all its\n> collations. If a user has say en_US and fr_FR, or any quite stable collation,\n> most of the glibc upgrades (except 2.28 of course) won't corrupt your indexes.\n\nIf the versions differ and your index happens to not be corrupt\nbecause it just so happened to not depend on any of the rules that\nhave changed, then a complaint about the collation versions changing\nis not what I'd call a false positive. 
You can call it that if you\nwant, I suppose -- it's just a question of semantics. But I don't\nthink you should conflate two very different things. You seem to be\nsuggesting that they're equivalent just because you can refer to both\nof them using the same term.\n\nIt's obvious that you could have an absence of index corruption even\nin the presence of a collation incompatibility. Especially when there\nis only 1 tuple in the index, say -- obviously the core idea is to\nmanage the dependency on versioned collations, which isn't magic. Do\nyou really think that's equivalent to having incorrect version\ndependencies?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 19 Apr 2021 19:27:24 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Bogus collation version recording in recordMultipleDependencies" }, { "msg_contents": "On Mon, Apr 19, 2021 at 07:27:24PM -0700, Peter Geoghegan wrote:\n> On Mon, Apr 19, 2021 at 6:45 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > > This argument seems completely absurd to me.\n> >\n> > I'm not sure why? For glibc at least, I don't see how we could not end up\n> > raising false positive as you have a single glibc version for all its\n> > collations. If a user has say en_US and fr_FR, or any quite stable collation,\n> > most of the glibc upgrades (except 2.28 of course) won't corrupt your indexes.\n> \n> If the versions differ and your index happens to not be corrupt\n> because it just so happened to not depend on any of the rules that\n> have changed, then a complaint about the collation versions changing\n> is not what I'd call a false positive. You can call it that if you\n> want, I suppose -- it's just a question of semantics. But I don't\n> think you should conflate two very different things. 
You seem to be\n> suggesting that they're equivalent just because you can refer to both\n> of them using the same term.\n> \n> It's obvious that you could have an absence of index corruption even\n> in the presence of a collation incompatibility. Especially when there\n> is only 1 tuple in the index, say \n\nYes, and technically you could still have corruption on indexes containing 1 or\neven 0 rows in case of collation provider upgrade, eg if you have a WHERE\nclause on the index that does depend on a collation.\n\n> -- obviously the core idea is to\n> manage the dependency on versioned collations, which isn't magic. Do\n> you really think that's equivalent to having incorrect version\n> dependencies?\n\nNo, I don't think that's equivalent. What I wanted to say is that it's impossible\nto raise a WARNING only when the index can really be corrupted (corner cases like\nempty tables or similar aside), for instance because of how glibc reports\nversions. So raising a WARNING in some limited corner cases that definitely can't\nbe corrupted (for example because the index expression itself doesn't depend on the\nordering), which clearly isn't the same thing, was in my opinion an acceptable\ntrade-off in a first version. Sorry if that was (or still is) poorly worded.\n\nIn any case it was proven that the current approach has way bigger deficiencies\nso it's probably not relevant anymore.\n\n\n", "msg_date": "Tue, 20 Apr 2021 11:02:31 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bogus collation version recording in recordMultipleDependencies" }, { "msg_contents": "On Tue, Apr 20, 2021 at 1:48 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> On Tue, Apr 20, 2021 at 12:05:27PM +1200, Thomas Munro wrote:\n> > Yeah, that runs directly into non-trivial locking problems.
I felt\n> > like some of the other complaints could conceivably be addressed in\n> > time, including dumb stuff like Windows default locale string format\n> > and hopefully some expression analysis problems, but not this. I'll\n> > hold off reverting for a few more days to see if anyone has any other\n> > thoughts on that, because there doesn't seem to be any advantage in\n> > being too hasty about it.\n>\n> I also feel that the ALTER TYPE example Tom showed earlier isn't something\n> trivial to fix and cannot be done in pg14 :(\n\nJust an idea: It might be possible to come up with a scheme where\nALTER TYPE ADD ATTRIBUTE records versions somewhere at column add\ntime, and index_check_collation_versions() finds and checks those when\nthey aren't superseded by index->collation versions created by\nREINDEX, or already present due to other dependencies on the same\ncollation. Of course, the opposite problem applies when you ALTER\nTYPE DROP ATTRIBUTE: you might have some zombie refobjversions you\ndon't need anymore, but that would seem to be the least of your\nworries if you drop attributes from composite types used in indexes:\n\ncreate type myrow as (f1 int, f2 int);\ncreate table mytable (r1 myrow primary key);\ninsert into mytable\nselect row(generate_series(1, 10), generate_series(10, 1, -1))::myrow;\nselect * from mytable;\nalter type myrow drop attribute f1;\nselect * from mytable;\nselect * from mytable where r1 = row(6); -- !!!\nreindex table mytable;\nselect * from mytable where r1 = row(6);\n\n\n", "msg_date": "Tue, 20 Apr 2021 15:07:16 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bogus collation version recording in recordMultipleDependencies" }, { "msg_contents": "Hi,\n\nOn 2021-04-20 12:05:27 +1200, Thomas Munro wrote:\n> I'll hold off reverting for a few more days to see if anyone has any\n> other thoughts on that, because there doesn't seem to be any advantage\n> in being too hasty about 
it.\n\nI'm not really convinced that this is warranted, and that it isn't\nbetter addressed by reducing the scope of the feature:\n\nWhen using index collation versions to decide whether to reindex\nindividual indexes, it is important to not have any false negatives -\notherwise the feature could trigger corruption.\n\nHowever, the feature has a second, IMO more crucial, aspect: Preventing\nsilent corruption due to collation changes. There are regular reports of\npeople corrupting their indexes (and subsequently constraints) due to\ncollation changes (or collation differences between primary/replica).\nTo be effective at detecting such cases it is not required to catch 100% of\nall dangerous cases, just that a high fraction of cases is caught.\n\nAnd handling the composite type case doesn't seem like it'd impact the\npercentage of detected collation issues all that much. For one, indexes\non composite types aren't all that common, and adding new columns to\nthose composite types is likely even rarer.
For another, I'd expect that\nnearly all databases that have indexes on composite types also have\nindexes on non-composite text columns - which'd be likely to catch the\nissue.\n\nGiven that this is a regularly occurring source of corruption for users,\nand not one just negligent operators run into (we want people to upgrade\nOS versions), I think we ought to factor that into our decision what to\ndo.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 21 Apr 2021 13:28:08 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Bogus collation version recording in recordMultipleDependencies" }, { "msg_contents": "\nOn 4/21/21 4:28 PM, Andres Freund wrote:\n> Hi,\n>\n> On 2021-04-20 12:05:27 +1200, Thomas Munro wrote:\n>> I'll hold off reverting for a few more days to see if anyone has any\n>> other thoughts on that, because there doesn't seem to be any advantage\n>> in being too hasty about it.\n> I'm not really convinced that this is warranted, and that it isn't\n> better addressed by reducing the scope of the feature:\n>\n> When using index collation versions to decide whether to reindex\n> individual indexes it is important to not have any false negatives -\n> otherwise the feature could trigger corruption.\n>\n> However, the feature has a second, IMO more crucial, aspect: Preventing\n> silent corruption due to collation changes. There are regular reports of\n> people corrupting their indexes (and subsequently constraints) due to\n> collation changes (or collation differences between primary/replica).\n> To be effective detecting such cases it is not required to catch 100% of\n> all dangerous cases, just that a high fraction of cases is caught.\n>\n> And handling the composite type case doesn't seem like it'd impact the\n> percentage of detected collation issues all that much. For one, indexes\n> on composite types aren't all that common, and additing new columns to\n> those composite types is likely even rarer. 
For another, I'd expect that\n> nearly all databases that have indexes on composite types also have\n> indexes on non-composite text columns - which'd be likely to catch the\n> issue.\n>\n> Given that this is a regularly occurring source of corruption for users,\n> and not one just negligent operators run into (we want people to upgrade\n> OS versions), I think we ought to factor that into our decision what to\n> do.\n>\n\n\nHi,\n\n\nthis is an open item for release 14 . The discussion seems to have gone\nsilent for a couple of weeks. Are we in a position to make any\ndecisions? I hear what Andres says, but is anyone acting on it?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 5 May 2021 16:58:08 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Bogus collation version recording in recordMultipleDependencies" }, { "msg_contents": "On Thu, May 6, 2021 at 8:58 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> this is an open item for release 14 . The discussion seems to have gone\n> silent for a couple of weeks. Are we in a position to make any\n> decisions? I hear what Andres says, but is anyone acting on it?\n\nI'm going to revert this and resubmit for 15. That'll give proper\ntime to reconsider the question of whether pg_depend is right for\nthis, and come up with a non-rushed response to the composite type\nproblem etc.\n\n\n", "msg_date": "Thu, 6 May 2021 09:12:18 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bogus collation version recording in recordMultipleDependencies" }, { "msg_contents": "\nOn 5/5/21 5:12 PM, Thomas Munro wrote:\n> On Thu, May 6, 2021 at 8:58 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>> this is an open item for release 14 . The discussion seems to have gone\n>> silent for a couple of weeks. Are we in a position to make any\n>> decisions? 
I hear what Andres says, but is anyone acting on it?\n> I'm going to revert this and resubmit for 15. That'll give proper\n> time to reconsider the question of whether pg_depend is right for\n> this, and come up with a non-rushed response to the composite type\n> problem etc.\n\n\nOK, thanks.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 5 May 2021 17:23:16 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Bogus collation version recording in recordMultipleDependencies" }, { "msg_contents": "On Thu, May 6, 2021 at 9:23 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> On 5/5/21 5:12 PM, Thomas Munro wrote:\n> > On Thu, May 6, 2021 at 8:58 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> >> this is an open item for release 14 . The discussion seems to have gone\n> >> silent for a couple of weeks. Are we in a position to make any\n> >> decisions? I hear what Andres says, but is anyone acting on it?\n> > I'm going to revert this and resubmit for 15. That'll give proper\n> > time to reconsider the question of whether pg_depend is right for\n> > this, and come up with a non-rushed response to the composite type\n> > problem etc.\n>\n> OK, thanks.\n\nReverted. Rebasing notes:\n\n1. Commit b4c9695e moved toast table declarations so I adapted to the\nnew scheme, but commit 0cc99327 had taken the OIDs that pg_collation\nwas previously using, so I had to pick some new ones from the\ntemporary range for later reassignment.\n\n2. It took me quite a while to figure out that the collversion column\nnow needs BKI_DEFAULT(_null_), or the perl script wouldn't accept the\ncontents of pg_collation.dat.\n\n3. 
In a separate commit, I rescued a few sentences of text from the\ndocumentation about libc collation versions and reinstated them in the\nmost obvious place, because although the per-index tracking has been\nreverted, the per-collation version tracking (limited as it is) is now\nback and works on more OSes than before.\n\n\n", "msg_date": "Fri, 7 May 2021 22:01:40 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bogus collation version recording in recordMultipleDependencies" } ]
[ { "msg_contents": "In [1] Andres and I speculated about whether we really need all\nthose PIN entries in pg_depend. Here is a draft patch that gets\nrid of them.\n\nIt turns out to be no big problem to replace the PIN entries\nwith an OID range check, because there's a well-defined point\nin initdb where it wants to pin (almost) all existing objects,\nand then no objects created after that are pinned. In the earlier\nthread I'd imagined having initdb record the OID counter at that\npoint in pg_control, and then we could look at the recorded counter\nvalue to make is-it-pinned decisions. However, that idea has a\nfatal problem: what shall pg_resetwal fill into that field when\nit has to gin up a pg_control file from scratch? There's no\ngood way to reconstruct the value.\n\nHence, what this patch does is to establish a manually-managed cutoff\npoint akin to FirstBootstrapObjectId, and make initdb push the OID\ncounter up to that once it's made the small number of pinned objects\nit's responsible for. With the value I used here, a couple hundred\nOIDs are wasted, but there seems to be a reasonable amount of headroom\nstill beyond that. On my build, the OID counter at the end of initdb\nis 15485 (with a reasonable number of glibc and ICU locales loaded).\nSo we still have about 900 free OIDs there; and there are 500 or so\nfree just below FirstBootstrapObjectId, too. So this approach does\nhasten the day when we're going to run out of free OIDs below 16384,\nbut not by all that much.\n\nThere are a couple of objects, namely template1 and the public\nschema, that are in the catalog .dat files but are not supposed\nto be pinned. The existing code accomplishes that by excluding them\n(in two different ways :-() while filling pg_depend. This patch\njust hard-wires exceptions for them in IsPinnedObject(), which seems\nto me not much uglier than what we had before. 
The existing code\nalso handles pinning of the standard tablespaces in an idiosyncratic\nway; I just dropped that and made them be treated as pinned.\n\nOne interesting point about doing things this way is that\nIsPinnedObject() will give correct answers throughout initdb, whereas\nbefore the backend couldn't tell what was supposed to be pinned until\nafter initdb loaded pg_depend. This means we don't need the hacky\ntruncation of pg_depend and pg_shdepend that initdb used to do,\nbecause now the backend will correctly not make entries relating to\nobjects it now knows are pinned. Aside from saving a few cycles,\nthis is more correct. For example, if some object that initdb made\nafter bootstrap but before truncating pg_depend had a dependency on\nthe public schema, the existing coding would lose track of that fact.\n(There's no live issue of that sort, I hasten to say, and really it\nwould be a bug to set things up that way because then you couldn't\ndrop the public schema. But the existing coding would make things\nworse by not detecting the mistake.)\n\nAnyway, as to concrete results:\n\n* pg_depend's total relation size, in a freshly made database,\ndrops from 1269760 bytes to 368640 bytes.\n\n* There seems to be a small but noticeable reduction in the time\nto run check-world. I compared runtimes on a not-particularly-modern\nmachine with spinning-rust storage, using -j4 parallelism:\n\nHEAD\nreal 5m4.248s\nuser 2m59.390s\nsys 1m21.473s\n\n+ patch\nreal 5m2.924s\nuser 2m36.196s\nsys 1m19.724s\n\nThese top-line numbers don't look too impressive, but the CPU-time\nreduction seems quite significant. 
Probably on a different hardware\nplatform that would translate more directly to runtime savings.\n\nI didn't try to reproduce the original performance bottleneck\nthat was complained of in [1], but that might be fun to check.\n\nAnyway, I'll stick this in the next CF so we don't lose track\nof it.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/947172.1617684433%40sss.pgh.pa.us#6a3d250a9c4a994cb3a26c87384fc823", "msg_date": "Wed, 14 Apr 2021 21:43:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Replacing pg_depend PIN entries with a fixed range check" }, { "msg_contents": "[ connecting up two threads here ]\n\nI wrote:\n> Hence, what this patch does is to establish a manually-managed cutoff\n> point akin to FirstBootstrapObjectId, and make initdb push the OID\n> counter up to that once it's made the small number of pinned objects\n> it's responsible for. With the value I used here, a couple hundred\n> OIDs are wasted, but there seems to be a reasonable amount of headroom\n> still beyond that. On my build, the OID counter at the end of initdb\n> is 15485 (with a reasonable number of glibc and ICU locales loaded).\n> So we still have about 900 free OIDs there; and there are 500 or so\n> free just below FirstBootstrapObjectId, too. So this approach does\n> hasten the day when we're going to run out of free OIDs below 16384,\n> but not by all that much.\n\nIn view of the discussion at [1], there's more pressure on the OID supply\nabove 10K than I'd realized. While I don't have any good ideas about\neliminating the problem altogether, I did have a thought that would remove\nthe extra buffer zone created by my first-draft patch in this thread.\nNamely, let's have genbki.pl write out its final OID assignment counter\nvalue in a command in the postgres.bki file, say \"set_next_oid 12036\".\nThis would cause the bootstrap backend to set the server's OID counter to\nthat value. 
Then the initial part of initdb's post-bootstrap processing\ncould assign pinned OIDs working forward from there, with no gap. We'd\nstill need a gap before FirstBootstrapObjectId (which we might as well\nrename to FirstUnpinnedObjectId), but we don't need two gaps, and so this\npatch wouldn't make things any worse than they are today.\n\nI'm not planning to put more effort into this patch right now, but\nI'll revise it along these lines once v15 development opens.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/CAGPqQf3JYTrTB1E1fu_zOGj%2BrG_kwTfa3UcUYPfNZL9o1bcYNw%40mail.gmail.com\n\n\n", "msg_date": "Thu, 15 Apr 2021 12:37:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Replacing pg_depend PIN entries with a fixed range check" }, { "msg_contents": "Hi,\n\nOn 2021-04-14 21:43:28 -0400, Tom Lane wrote:\n> In [1] Andres and I speculated about whether we really need all\n> those PIN entries in pg_depend. Here is a draft patch that gets\n> rid of them.\n\nYay.\n\n> There are a couple of objects, namely template1 and the public\n> schema, that are in the catalog .dat files but are not supposed\n> to be pinned. The existing code accomplishes that by excluding them\n> (in two different ways :-() while filling pg_depend. This patch\n> just hard-wires exceptions for them in IsPinnedObject(), which seems\n> to me not much uglier than what we had before. The existing code\n> also handles pinning of the standard tablespaces in an idiosyncratic\n> way; I just dropped that and made them be treated as pinned.\n\nHm, maybe we ought to swap template0 and template1 instead? I.e. 
have\ntemplate0 be in pg_database.dat and thus get a pinned oid, and then\ncreate template1, postgres etc from that?\n\nI guess we could also just create public in initdb.\n\nNot that it matters much, having those exceptions doesn't seem too bad.\n\n\n\n> Anyway, as to concrete results:\n> \n> * pg_depend's total relation size, in a freshly made database,\n> drops from 1269760 bytes to 368640 bytes.\n\nNice!\n\n\n\n> I didn't try to reproduce the original performance bottleneck\n> that was complained of in [1], but that might be fun to check.\n\nI hope it's not reproducible as is, because I hopefully did fix the bug\nleading to it ;)\n\n> +bool\n> +IsPinnedObject(Oid classId, Oid objectId)\n> +{\n> +\t/*\n> +\t * Objects with OIDs above FirstUnpinnedObjectId are never pinned. Since\n> +\t * the OID generator skips this range when wrapping around, this check\n> +\t * guarantees that user-defined objects are never considered pinned.\n> +\t */\n> +\tif (objectId >= FirstUnpinnedObjectId)\n> +\t\treturn false;\n> +\n> +\t/*\n> +\t * Large objects are never pinned. We need this special case because\n> +\t * their OIDs can be user-assigned.\n> +\t */\n> +\tif (classId == LargeObjectRelationId)\n> +\t\treturn false;\n> +\n\nHuh, shouldn't we reject that when creating them? IIRC we already use\noid range checks in a bunch of places? I guess you didn't because of\ndump/restore concerns?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 15 Apr 2021 16:48:12 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Replacing pg_depend PIN entries with a fixed range check" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Hm, maybe we ought to swap template0 and template1 instead? I.e. 
have\n> template0 be in pg_database.dat and thus get a pinned oid, and then\n> create template1, postgres etc from that?\n\nNo, *neither* of them are pinned, and we don't want them to be.\nIt's something of a historical artifact that template1 has a low OID.\n\n>> +\t/*\n>> +\t * Large objects are never pinned. We need this special case because\n>> +\t * their OIDs can be user-assigned.\n>> +\t */\n>> +\tif (classId == LargeObjectRelationId)\n>> +\t\treturn false;\n\n> Huh, shouldn't we reject that when creating them?\n\nWe've got regression tests that create blobs with small OIDs :-(.\nWe could change those tests of course, but they're pretty ancient\nand I'm hesitant to move those goal posts.\n\n> I guess you didn't because of dump/restore concerns?\n\nThat too.\n\nIn short, I'm really skeptical of changing any of these pin-or-not\ndecisions to save one or two comparisons in IsPinnedObject. That\nfunction is already orders of magnitude faster than what it replaces;\nwe don't need to sweat over making it faster yet.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 15 Apr 2021 19:59:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Replacing pg_depend PIN entries with a fixed range check" }, { "msg_contents": "Hi,\n\nOn 2021-04-15 19:59:24 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Hm, maybe we ought to swap template0 and template1 instead? I.e. have\n> > template0 be in pg_database.dat and thus get a pinned oid, and then\n> > create template1, postgres etc from that?\n> \n> No, *neither* of them are pinned, and we don't want them to be.\n> It's something of a historical artifact that template1 has a low OID.\n\nHm, it makes sense for template1 not to be pinned, but it doesn't seem\nas obvious why that should be the case for template0.\n\n\n> In short, I'm really skeptical of changing any of these pin-or-not\n> decisions to save one or two comparisons in IsPinnedObject. 
That\n> function is already orders of magnitude faster than what it replaces;\n> we don't need to sweat over making it faster yet.\n\nI'm not at all concerned about the speed after the change - it just\nseems cleaner and easier to understand not to have exceptions.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 15 Apr 2021 17:05:47 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Replacing pg_depend PIN entries with a fixed range check" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-04-15 19:59:24 -0400, Tom Lane wrote:\n>> No, *neither* of them are pinned, and we don't want them to be.\n>> It's something of a historical artifact that template1 has a low OID.\n\n> Hm, it makes sense for template1 not to be pinned, but it doesn't seem\n> as obvious why that should be the case for template0.\n\nIIRC, the docs suggest that in an emergency you could recreate either\nof them from the other. Admittedly, if you've put stuff in template1\nthen this might cause problems later, but I think relatively few\npeople do that.\n\n> I'm not at all concerned about the speed after the change - it just\n> seems cleaner and easier to understand not to have exceptions.\n\nWe had these exceptions already, they were just implemented in initdb\nrather than the backend.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 15 Apr 2021 20:10:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Replacing pg_depend PIN entries with a fixed range check" }, { "msg_contents": "I wrote:\n> In view of the discussion at [1], there's more pressure on the OID supply\n> above 10K than I'd realized. 
While I don't have any good ideas about\n> eliminating the problem altogether, I did have a thought that would remove\n> the extra buffer zone created by my first-draft patch in this thread.\n> Namely, let's have genbki.pl write out its final OID assignment counter\n> value in a command in the postgres.bki file, say \"set_next_oid 12036\".\n> This would cause the bootstrap backend to set the server's OID counter to\n> that value. Then the initial part of initdb's post-bootstrap processing\n> could assign pinned OIDs working forward from there, with no gap. We'd\n> still need a gap before FirstBootstrapObjectId (which we might as well\n> rename to FirstUnpinnedObjectId), but we don't need two gaps, and so this\n> patch wouldn't make things any worse than they are today.\n\nHere's a v2 that does things that way (and is rebased up to HEAD).\nI did some more documentation cleanup, too.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 12 May 2021 18:20:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Replacing pg_depend PIN entries with a fixed range check" }, { "msg_contents": "On Wed, May 12, 2021 at 6:21 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Here's a v2 that does things that way (and is rebased up to HEAD).\n> I did some more documentation cleanup, too.\n\nThe first hunk of the patch seems to back away from the idea that the\ncutoff is 13000, but the second half of the patch says 13000 still\nmatters. Not sure I understand what's going on there exactly.\n\nI suggest deleting the words \"An additional thing that is useful to\nknow is that\" because the rest of the sentence is fine without it.\n\nI'm sort of wondering what we think the long term plan ought to be.\nAre there some categories of things we should be looking to move out\nof the reserved OID space to keep it from filling up? 
Can we\nrealistically think of moving the 16384 boundary?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 26 May 2021 11:11:26 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replacing pg_depend PIN entries with a fixed range check" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> The first hunk of the patch seems to back away from the idea that the\n> cutoff is 13000, but the second half of the patch says 13000 still\n> matters. Not sure I understand what's going on there exactly.\n\nNot sure exactly what you're looking at, but IIRC there is a place\nwhere the patch is cleaning up after ab596105b's failure to adjust\nbki.sgml to match its change of FirstBootstrapObjectId from 12000\nto 13000. I hadn't bothered to fix that separately, but I guess\nwe should do so, else v14 is going to ship with incorrect docs.\n\n> I'm sort of wondering what we think the long term plan ought to be.\n> Are there some categories of things we should be looking to move out\n> of the reserved OID space to keep it from filling up? Can we\n> realistically think of moving the 16384 boundary?\n\nI haven't got any wonderful ideas there. I do not see how we can\nmove the 16384 boundary without breaking pg_upgrade'ing, because\npg_upgrade relies on preserving user object OIDs that are likely\nto be not much above that value. Probably, upping\nFirstNormalObjectId ought to be high on our list of things to do\nif we ever do force an on-disk compatibility break. In the\nmeantime, we could decrease the 10000 boundary if things get\ntight above that, but I fear that would annoy some extension\nmaintainers.\n\nAnother idea is to give up the principle that initdb-time OIDs\nneed to be globally unique. They only really need to be\nunique within their own catalogs, so we could buy a lot of space\nby exploiting that. 
The original reason for that policy was to\nreduce the risk of mistakes in handwritten OID references in\nthe initial catalog data --- but now that numeric references\nthere are Not Done, it seems like we don't really need that.\n\nAn intermediate step, perhaps, could be to give up that\nuniqueness only for OIDs assigned by genbki.pl itself, while\nkeeping it for OIDs below 10000. This'd be appealing if we\nfind that we're getting tight between 10K and 13K.\n\nIn any case it doesn't seem like the issue is entirely pressing\nyet. Although ... maybe we should do that last bit now, so\nthat we can revert FirstBootstrapObjectId to 12K before v14\nships? I've felt a little bit of worry that that change might\ncause problems on machines with a boatload of locales.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 26 May 2021 11:37:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Replacing pg_depend PIN entries with a fixed range check" }, { "msg_contents": "On Wed, May 26, 2021 at 11:37 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> In any case it doesn't seem like the issue is entirely pressing\n> yet. Although ... maybe we should do that last bit now, so\n> that we can revert FirstBootstrapObjectId to 12K before v14\n> ships? I've felt a little bit of worry that that change might\n> cause problems on machines with a boatload of locales.\n\nI think that particular case is definitely worth worrying about. Most\nof what we put into the system catalogs is our own hand-crafted\nentries, but that's coming from the operating system and we have no\ncontrol over it whatever. It wouldn't be very nice to have to suggest\nto users who get can't initdb that perhaps they should delete some\nlocales...\n\nHonestly, it seems odd to me that these entries use reserved OIDs\nrather than regular ones at all. Why does the first run of\npg_import_system_collations use special magic OIDs, and later runs use\nregular OIDs? 
pg_type OIDs need to remain stable from release to\nrelease since it's part of the on disk format for arrays, and pg_proc\nOIDs have to be the same at compile time and initdb time because of\nthe fmgr hash table, and any other thing that has a constant that\nmight be used in the source code also has that issue. But none of this\napplies to collations: they can't be expected to have the same OID from\nrelease to release, or even from one installation to another; the\nsource code can't be relying on the specific values; and we have no\nidea how many there might be.\n\nSo I think your proposal of allowing genbki-assigned OIDs to be reused\nin different catalogs is probably a pretty good one, but I wonder if\nwe could just rejigger things so that collations just get normal OIDs\n> 16384.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 26 May 2021 12:35:01 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replacing pg_depend PIN entries with a fixed range check" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> So I think your proposal of allowing genbki-assigned OIDs to be reused\n> in different catalogs is probably a pretty good one, but I wonder if\n> we could just rejigger things so that collations just get normal OIDs\n> > 16384.\n\nHm. I can't readily think of a non-hack way of making that happen.\nIt's also unclear to me how it'd interact with assignment of OIDs\nto regular user objects. 
Maybe we'll have to go there eventually,\nbut I'm not in a hurry to.\n\nMeanwhile, I'll draft a patch for the other thing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 26 May 2021 12:45:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Replacing pg_depend PIN entries with a fixed range check" }, { "msg_contents": "I wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> The first hunk of the patch seems to back away from the idea that the\n>> cutoff is 13000, but the second half of the patch says 13000 still\n>> matters. Not sure I understand what's going on there exactly.\n\n> Not sure exactly what you're looking at, but IIRC there is a place\n> where the patch is cleaning up after ab596105b's failure to adjust\n> bki.sgml to match its change of FirstBootstrapObjectId from 12000\n> to 13000. I hadn't bothered to fix that separately, but I guess\n> we should do so, else v14 is going to ship with incorrect docs.\n\nI take that back: I had committed that doc fix, in 1f9b0e693, so\nI'm still unsure what was confusing you. (But a4390abec just\nreverted it, anyway.)\n\nAttached is a rebase over a4390abec. The decision in that commit\nto not expect global uniqueness of OIDs above 10K frees us to use\na much simpler solution than before: we can just go ahead and start\nthe backend's OID counter at 10000, and not worry about conflicts,\nbecause the OID generation logic can deal with any conflicts just\nfine as long as you're okay with only having per-catalog uniqueness.\nSo this gets rid of the set_next_oid mechanism that I'd invented in\nv2, and yet there's still no notable risk of running out of OIDs in\nthe 10K-12K range.\n\nWhile testing this, I discovered something that I either never knew\nor had forgotten: the bootstrap backend is itself assigning some\nOIDs, specifically OIDs for the composite types associated with most\nof the system catalogs (plus their array types). 
I find this scary,\nbecause it is happening before we've built the catalog indexes, so\nit's impossible to ensure uniqueness. (Of course, when we do build\nthe indexes, we'd notice any conflicts; but that's not a solution.)\nI think it accidentally works because we don't ask genbki.pl to\nassign any pg_type OIDs, but that seems fragile. Seems like maybe\nwe should fix genbki.pl to assign those OIDs, and then change\nGetNewOidWithIndex to error out in bootstrap mode. However that's a\npre-existing issue, so I don't feel that this patch needs to be\nthe one to fix it.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 27 May 2021 18:53:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Replacing pg_depend PIN entries with a fixed range check" }, { "msg_contents": "On Thu, May 27, 2021 at 6:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Attached is a rebase over a4390abec.\n\nLooks good to me overall, I just had a couple questions/comments:\n\nisObjectPinned and isSharedObjectPinned are now thin wrappers around\nIsPinnedObject. Is keeping those functions a matter of future-proofing in\ncase something needs to be handled differently someday, or reducing\nunnecessary code churn?\n\nsetup_depend now doesn't really need to execute any SQL (unless third-party\nforks have extra steps here?), and could be replaced with a direct call\nto StopGeneratingPinnedObjectIds. That's a bit more self-documenting, and\nthat would allow shortening this comment:\n\n /*\n* Note that no objects created after setup_depend() will be \"pinned\".\n* They are all droppable at the whim of the DBA.\n*/\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 14 Jul 2021 13:56:15 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Replacing pg_depend PIN entries with a fixed range check" }, { "msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> On Thu, May 27, 2021 at 6:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Attached is a rebase over a4390abec.\n\n> Looks good to me overall, I just had a couple questions/comments:\n\nThanks for looking!\n\n> isObjectPinned and isSharedObjectPinned are now thin wrappers around\n> IsPinnedObject. Is keeping those functions a matter of future-proofing in\n> case something needs to be handled differently someday, or reducing\n> unnecessary code churn?\n\nYeah, it was mostly a matter of reducing code churn. We could probably\ndrop isSharedObjectPinned altogether, but isObjectPinned seems to have\nsome notational value in providing an API that takes an ObjectAddress.\n\n> setup_depend now doesn't really need to execute any SQL (unless third-party\n> forks have extra steps here?), and could be replaced with a direct call\n> to StopGeneratingPinnedObjectIds. That's a bit more self-documenting, and\n> that would allow shortening this comment:\n\nHm, I'm not following? setup_depend runs in initdb, that is on the\nclient side. 
It can't invoke backend-internal functions any other\nway than via SQL, AFAICS.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 14 Jul 2021 15:34:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Replacing pg_depend PIN entries with a fixed range check" }, { "msg_contents": "On Wed, Jul 14, 2021 at 3:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Hm, I'm not following? setup_depend runs in initdb, that is on the\n> client side. It can't invoke backend-internal functions any other\n> way than via SQL, AFAICS.\n\nAh, brainfade on my part.\n\nI was also curious about the test case where Andres fixed a regression in\nthe parent thread [1], and there is a noticeable improvement (lowest of 10\nmeasurements):\n\nHEAD: 623ms\npatch: 567ms\n\nIf no one else has anything, I think this is ready for commit.\n\n[1]\nhttps://www.postgresql.org/message-id/20210406043521.lopeo7bbigad3n6t%40alap3.anarazel.de\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Wed, Jul 14, 2021 at 3:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:> Hm, I'm not following?  setup_depend runs in initdb, that is on the> client side.  
It can't invoke backend-internal functions any other\n> way than via SQL, AFAICS.\n\nAh, brainfade on my part.\n\nI was also curious about the test case where Andres fixed a regression in\nthe parent thread [1], and there is a noticeable improvement (lowest of 10\nmeasurements):\n\nHEAD: 623ms\npatch: 567ms\n\nIf no one else has anything, I think this is ready for commit.\n\n[1]\nhttps://www.postgresql.org/message-id/20210406043521.lopeo7bbigad3n6t%40alap3.anarazel.de\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 14 Jul 2021 16:10:26 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Replacing pg_depend PIN entries with a fixed range check" }, { "msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> If no one else has anything, I think this is ready for commit.\n\nPushed, after adopting the suggestion to dispense with\nisSharedObjectPinned.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 15 Jul 2021 11:43:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Replacing pg_depend PIN entries with a fixed range check" } ]
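The fixed-range check that the thread above converged on can be sketched in a few lines of C. This is a simplified, hypothetical illustration rather than the actual PostgreSQL source: the real IsPinnedObject() also receives the catalog OID and carves out a handful of exceptions, and while a cutoff constant named FirstUnpinnedObjectId does exist in the tree, the numeric value used below is only an assumption for the example.

```c
#include <stdbool.h>

/* Hypothetical stand-in for PostgreSQL's Oid type. */
typedef unsigned int Oid;

/*
 * Illustrative boundary: every object created during the early, "pinned"
 * phase of initdb receives an OID below a fixed cutoff, so "is this object
 * pinned?" becomes a cheap range check instead of a pg_depend lookup.
 * The value here is only an example, not the actual constant's value.
 */
#define FIRST_UNPINNED_OBJECT_ID 12000

static bool
is_pinned_object(Oid object_id)
{
    /* Objects created before setup_depend() stopped handing out pinned
     * OIDs are pinned; everything created afterwards is droppable. */
    return object_id < FIRST_UNPINNED_OBJECT_ID;
}
```

Under a scheme like this, a pin lookup costs a single integer comparison instead of a catalog scan, which is consistent with the pg_depend shrinkage and the timing improvement reported above.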
[ { "msg_contents": "Hi,\n\nI am returning back to implementation of schema variables. The schema\nvariables can be used as an alternative to package variables (Oracle's\nPL/SQL or ADA). The schema variables can be used as fast and safe storage\nof session information for RLS too.\n\nThe previous implementation had not cleanly implemented execution of the\nLET statement. It was something between query and utility, and although it\nwas working - it was out of Postgres concept (with different implementation\nof queries and utilities).\n\nI totally rewrote the implementation of the LET statement. I prepared two\nvariants:\n\nFirst variant is based on the introduction of the new command type CMD_LET\nand new very small executor node SetVariable (this is a very very reduced\nanalogy of ModifyTable node). The code is consistent and what is important\n- the LET statement can be prepared. The execution is relatively fast from\nPLpgSQL too. Without any special support the execution has the same speed\nlike non simple queries. The statement reuses an execution plan, but\nsimple execution is not supported.\n\nSecond variant is implemented like a classic utility command. There is not\nany surprise. It is shorter, simple, but the LET statement cannot be\nprepared (this is the limit of all utility statements). Without special\nsupport in PLpgSQL the execution is about 10x slower than the execution of\nthe first variant. But there is a new possibility of using the main parser\nfrom PLpgSQL (implemented by Tom for new implementation of assign statement\nin pg 14), and then this support in plpgsql requires only a few lines).\nWhen the statement LET is explicitly supported by PLpgSQL, then execution\nis very fast (the speed is comparable with the speed of the assign\nstatement) - it is about 10x faster than the first variant.\n\nI tested code\n\ndo $$\ndeclare x int ;\nbegin\n for i in 1..1000000\n loop\n let ooo = i;\n end loop;\nend;\n$$;\n\nvariant 1 .. 
1500 ms\nvariant 2 with PLpgSQL support .. 140 ms\nvariant 2 without PLpgSQL support 9000 ms\n\nThe slower speed of the first variant from PLpgSQL can be fixed. But for\nthis moment, the speed is good enough. This is the worst case, because in\nthe first variant LET statement cannot use optimization for simple query\nevaluation (now).\n\nNow I think so implementation is significantly cleaner, and I hope so it\nwill be more acceptable for committers.\n\nI am starting a new thread, because this is a new implementation, and\nbecause I am sending two alternative implementations of one functionality.\n\nComments, notes, objections?\n\nRegards\n\nPavel", "msg_date": "Thu, 15 Apr 2021 10:42:42 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Schema variables - new implementation for Postgres 15" }, { "msg_contents": "From: Pavel Stehule <pavel.stehule@gmail.com>\r\n--------------------------------------------------\r\ndo $$\r\ndeclare x int ;\r\nbegin\r\n for i in 1..1000000\r\n loop\r\n let ooo = i;\r\n end loop;\r\nend;\r\n$$;\r\n\r\nvariant 1 .. 1500 ms\r\nvariant 2 with PLpgSQL support .. 140 ms\r\nvariant 2 without PLpgSQL support 9000 ms\r\n--------------------------------------------------\r\n\r\n\r\nThat's impressive! But 1 million times of variable assignment took only 140 ms? It's that one assignment took only 140 nanosecond, which is near one DRAM access? Can PL/pgSQL processing be really so fast?\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n", "msg_date": "Thu, 15 Apr 2021 16:02:45 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "čt 15. 4. 2021 v 18:02 odesílatel tsunakawa.takay@fujitsu.com <\ntsunakawa.takay@fujitsu.com> napsal:\n\n> From: Pavel Stehule <pavel.stehule@gmail.com>\n>\n> --------------------------------------------------\n>\n> do $$\n>\n> declare x int ;\n>\n> begin\n>\n> for i in 1..1000000\n>\n> loop\n>\n> let ooo = i;\n>\n> end loop;\n>\n> end;\n>\n> $$;\n>\n>\n>\n> variant 1 .. 1500 ms\n>\n> variant 2 with PLpgSQL support .. 140 ms\n>\n> variant 2 without PLpgSQL support 9000 ms\n>\n> --------------------------------------------------\n>\n>\n>\n>\n>\n> That's impressive! But 1 million times of variable assignment took only\n> 140 ms? It's that one assignment took only 140 nanosecond, which is near\n> one DRAM access? Can PL/pgSQL processing be really so fast?\n>\n\nIn this case the PLpgSQL can be very fast - and after changes in pg 13, the\nPLpgSQL is not significantly slower than Lua or than PHP.\n\nEvery body can repeat these tests - I did it on my Lenovo T520 notebook\n\nPavel\n\n\n\n>\n>\n>\n> Regards\n>\n> Takayuki Tsunakawa\n>\n>\n>\n", "msg_date": "Thu, 15 Apr 2021 18:11:40 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "čt 15. 4. 2021 v 10:42 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi,\n>\n> I am returning back to implementation of schema variables. The schema\n> variables can be used as an alternative to package variables (Oracle's\n> PL/SQL or ADA). The schema variables can be used as fast and safe storage\n> of session information for RLS too.\n>\n> The previous implementation had not cleanly implemented execution of the\n> LET statement. It was something between query and utility, and although it\n> was working - it was out of Postgres concept (with different implementation\n> of queries and utilities).\n>\n> I totally rewrote the implementation of the LET statement. I prepared two\n> variants:\n>\n> First variant is based on the introduction of the new command type CMD_LET\n> and new very small executor node SetVariable (this is a very very reduced\n> analogy of ModifyTable node). The code is consistent and what is important\n> - the LET statement can be prepared. The execution is relatively fast from\n> PLpgSQL too. Without any special support the execution has the same speed\n> like non simple queries. 
The statement reuses an execution plan, but\n> simple execution is not supported.\n>\n> Second variant is implemented like a classic utility command. There is not\n> any surprise. It is shorter, simple, but the LET statement cannot be\n> prepared (this is the limit of all utility statements). Without special\n> support in PLpgSQL the execution is about 10x slower than the execution of\n> the first variant. But there is a new possibility of using the main parser\n> from PLpgSQL (implemented by Tom for new implementation of assign statement\n> in pg 14), and then this support in plpgsql requires only a few lines).\n> When the statement LET is explicitly supported by PLpgSQL, then execution\n> is very fast (the speed is comparable with the speed of the assign\n> statement) - it is about 10x faster than the first variant.\n>\n> I tested code\n>\n> do $$\n> declare x int ;\n> begin\n> for i in 1..1000000\n> loop\n> let ooo = i;\n> end loop;\n> end;\n> $$;\n>\n> variant 1 .. 1500 ms\n> variant 2 with PLpgSQL support .. 140 ms\n> variant 2 without PLpgSQL support 9000 ms\n>\n> The slower speed of the first variant from PLpgSQL can be fixed. But for\n> this moment, the speed is good enough. 
This is the worst case, because in\n> the first variant LET statement cannot use optimization for simple query\n> evaluation (now).\n>\n> Now I think so implementation is significantly cleaner, and I hope so it\n> will be more acceptable for committers.\n>\n> I am starting a new thread, because this is a new implementation, and\n> because I am sending two alternative implementations of one functionality.\n>\n> Comments, notes, objections?\n>\n>\nI am sending only one patch and I assign this thread to commitfest\napplication\n\nRegards\n\nPavel\n\n\n> Regards\n>\n> Pavel\n>\n>\n>", "msg_date": "Fri, 16 Apr 2021 05:32:26 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On Thu, Apr 15, 2021, at 10:42, Pavel Stehule wrote:\n> *Attachments:*\n> * schema-variables-v-execnode-2021-01.patch\n> * schema-variables-v-utility-2021-01.patch\n\nApplications are currently know to be misusing set_config()+current_setting() to pass information in a session or transaction.\n\nSuch users might be interested in using Schema variables as a better replacement.\n\nHowever, since set_config() is transactional, it can't be used as a drop-in replacement:\n\n+ <para>\n+ The value of a schema variable is local to the current session. Retrieving\n+ a variable's value returns either a NULL or a default value, unless its value\n+ is set to something else in the current session with a LET command. The content\n+ of a variable is not transactional. 
This is the same as in regular variables\n+ in PL languages.\n+ </para>\n\nI think the \"The content of a variable is not transactional.\" part is therefore a bad idea.\n\nAnother pattern is to use TEMP TABLEs to pass around information in a session or transaction.\nIf the LET command would be transactional, it could be used as a drop-in replacement for such use-cases as well.\n\nFurthermore, I think a non-transactional LET command would be insidious,\nsince it looks like any other SQL command, all of which are transactional.\n(The ones that aren't such as REINDEX CONCURRENTLY will properly throw an error if run inside a transaction block.)\n\nA non-transactional LET command IMO would be non-SQL-idiomatic and non-intuitive.\n\n/Joel\n", "msg_date": "Fri, 16 Apr 2021 08:06:48 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "pá 16. 4. 2021 v 8:07 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n\n> On Thu, Apr 15, 2021, at 10:42, Pavel Stehule wrote:\n>\n> *Attachments:*\n>\n> - schema-variables-v-execnode-2021-01.patch\n> - schema-variables-v-utility-2021-01.patch\n>\n>\n> Applications are currently know to be misusing\n> set_config()+current_setting() to pass information in a session or\n> transaction.\n>\n> Such users might be interested in using Schema variables as a better\n> replacement.\n>\n> However, since set_config() is transactional, it can't be used as a\n> drop-in replacement:\n>\n> + <para>\n> + The value of a schema variable is local to the current session.\n> Retrieving\n> + a variable's value returns either a NULL or a default value, unless\n> its value\n> + is set to something else in the current session with a LET command.\n> The content\n> + of a variable is not transactional. 
This is the same as in regular\n> variables\n> + in PL languages.\n> + </para>\n>\n> I think the \"The content of a variable is not transactional.\" part is\n> therefore a bad idea.\n>\n> Another pattern is to use TEMP TABLEs to pass around information in a\n> session or transaction.\n> If the LET command would be transactional, it could be used as a drop-in\n> replacement for such use-cases as well.\n>\n> Furthermore, I think a non-transactional LET command would be insidious,\n> since it looks like any other SQL command, all of which are transactional.\n> (The ones that aren't such as REINDEX CONCURRENTLY will properly throw an\n> error if run inside a transaction block.)\n>\n> A non-transactional LET command IMO would be non-SQL-idiomatic and\n> non-intuitive.\n>\n\nI am sorry, but in this topic we have different opinions. The variables in\nPLpgSQL are not transactional too (same is true in Perl, Python, ...).\nSession variables in Oracle, MS SQL, DB2, MySQL are not transactional too.\nMy primary focus is PLpgSQL - and I would like to use schema variables as\nglobal plpgsql variables (from PLpgSQL perspective) - that means in\nPostgres's perspective session variables. But in Postgres, I have to write\nfeatures that will work with others PL too - PLPython, PLPerl, ...\nStatement SET in ANSI/SQL standard (SQL/PSM) doesn't expect transactional\nbehaviour for variables too. Unfortunately SET keyword is used in Postgres\nfor GUC, and isn't possible to reuse without a compatibility break.\n\nThe PostgreSQL configuration is transactional, but it is a different\nfeature designed for different purposes. Using GUC like session variables\nis just a workaround. It can be useful for some cases, sure. But it is not\nusual behaviour. And for other cases the transactional behaviour is not\npractical. Schema variables are not replacement of GUC, schema variables\nare not replacement of temporal tables. There is a prepared patch for\nglobal temp tables. 
I hope so this patch can be committed to Postgres 15.\nGlobal temp tables fixes almost all disadvantages of temporary tables in\nPostgres. So the schema variable is not a one row table. It is a different\ncreature - designed to support the server's side procedural features.\n\nI have prepared a patch that allows non default transactional behaviour\n(but this behaviour should not be default - I didn't design schema\nvariables as temp tables replacement). This patch increases the length of\nthe current patch about 1/4, and I have enough work with rebasing with the\ncurrent patch, so I didn't send it to commitfest now. If schema variables\nwill be inside core, this day I'll send the patch that allows transactional\nbehaviour for schema variables - I promise.\n\nRegards\n\nPavel\n\n\n\n\n> /Joel\n>\n>\n>\n", "msg_date": "Fri, 16 Apr 2021 08:40:42 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "From: Pavel Stehule <pavel.stehule@gmail.com>\r\n--------------------------------------------------\r\nI am sorry, but in this topic we have different opinions. The variables in PLpgSQL are not transactional too (same is true in Perl, Python, ...). Session variables in Oracle, MS SQL, DB2, MySQL are not transactional too. My primary focus is PLpgSQL - and I would like to use schema variables as global plpgsql variables (from PLpgSQL perspective) - that means in Postgres's perspective session variables. But in Postgres, I have to write features that will work with others PL too - PLPython, PLPerl, ... Statement SET in ANSI/SQL standard (SQL/PSM) doesn't expect transactional behaviour for variables too. Unfortunately SET keyword is used in Postgres for GUC, and isn't possible to reuse without a compatibility break.\r\n\r\nThe PostgreSQL configuration is transactional, but it is a different feature designed for different purposes. Using GUC like session variables is just a workaround. It can be useful for some cases, sure. 
But it is not usual behaviour. And for other cases the transactional behaviour is not practical. Schema variables are not replacement of GUC, schema variables are not replacement of temporal tables. There is a prepared patch for global temp tables. I hope so this patch can be committed to Postgres 15. Global temp tables fixes almost all disadvantages of temporary tables in Postgres. So the schema variable is not a one row table. It is a different creature - designed to support the server's side procedural features.\r\n--------------------------------------------------\r\n\r\n+1\r\nI understand (and wish) this feature is intended to ease migration from Oracle PL/SQL, which will further increase the popularity of Postgres. So, the transactional behavior is not necessary unless Oracle has such a feature.\r\n\r\nFurthermore, Postgres already has some non-transactonal SQL commands. So, I don't think we need to reject non-transactional LET.\r\n\r\n* Sequence operation: SELECT nextval/setval\r\n* SET [SESSION]\r\n* SET ROLE\r\n* SET SESSION AUTHORIZATION\r\n\r\n\r\n--------------------------------------------------\r\nI have prepared a patch that allows non default transactional behaviour (but this behaviour should not be default - I didn't design schema variables as temp tables replacement). This patch increases the length of the current patch about 1/4, and I have enough work with rebasing with the current patch, so I didn't send it to commitfest now. 
If schema variables will be inside core, this day I'll send the patch that allows transactional behaviour for schema variables - I promise.\r\n--------------------------------------------------\r\n\r\nI prefer the simpler, targeted one without transactional behavior initially, because added complexity might prevent this feature from being committed in PG 15.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n", "msg_date": "Fri, 16 Apr 2021 07:00:39 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nonly rebase\n\nRegards\n\nPavel", "msg_date": "Wed, 12 May 2021 06:17:03 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "st 12. 5. 
2021 v 6:17 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> only rebase\n>\n\nsecond try - rebase after serial_scheduler remove\n\nRegards\n\nPavel\n\n\n> Regards\n>\n> Pavel\n>", "msg_date": "Wed, 12 May 2021 07:37:14 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "st 12. 5. 2021 v 7:37 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> st 12. 5. 2021 v 6:17 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> napsal:\n>\n>> Hi\n>>\n>> only rebase\n>>\n>\n> second try - rebase after serial_scheduler remove\n>\n\nonly rebase\n\nRegards\n\nPavel\n\n\n> Regards\n>\n> Pavel\n>\n>\n>> Regards\n>>\n>> Pavel\n>>\n>", "msg_date": "Mon, 17 May 2021 11:04:59 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nrebase\n\nRegards\n\nPavel", "msg_date": "Sat, 12 Jun 2021 08:00:05 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "so 12. 6. 2021 v 8:00 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> rebase\n>\n>\nrebase only\n\nRegards\n\nPavel\n\n\nRegards\n>\n> Pavel\n>", "msg_date": "Fri, 2 Jul 2021 13:29:27 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi,\n\nReview resume:\n\n\nThis patch implements Schema Variables that are database objects that\ncan hold a single or composite value following the data type used at\nvariable declaration. Schema variables, like relations, exist within a\nschema and their access is controlled via GRANT and REVOKE commands. 
The\nschema variable can be created by the CREATE VARIABLE command, altered\nusing ALTER VARIABLE and removed using DROP VARIABLE.\n\nThe value of a schema variable is local to the current session.\nRetrieving a variable's value returns either a NULL or a default value,\nunless its value is set to something else in the current session with a\nLET command. The content of a variable is not transactional. This is the\nsame as in regular variables in PL languages.\n\nSchema variables are retrieved by the SELECT SQL command. Their value is\nset with the LET SQL command. While schema variables share properties\nwith tables, their value cannot be updated with an UPDATE command.\n\n\nThe patch apply with the patch command without problem and compilation\nreports no warning or errors. Regression tests pass successfully using\nmake check or make installcheck\nIt also includes all documentation and regression tests.\n\nPerformances are near the set of plpgsql variable settings which is\nimpressive:\n\ndo $$\ndeclare var1 int ; i int;\nbegin\n  for i in 1..1000000\n  loop\n    var1 := i;\n  end loop;\nend;\n$$;\nDO\nTime: 71,515 ms\n\nCREATE VARIABLE var1 AS integer;\ndo $$\ndeclare i int ;\nbegin\n  for i in 1..1000000\n  loop\n    let var1 = i;\n  end loop;\nend;\n$$;\nDO\nTime: 94,658 ms\n\nThere is just one thing that puzzles me.We can use :\n\n    CREATE VARIABLE var1 AS date NOT NULL;\n    postgres=# SELECT var1;\n    ERROR:  null value is not allowed for NOT NULL schema variable \"var1\"\n\nwhich I understand and is the right behavior. 
But if we use:\n\n    CREATE IMMUTABLE VARIABLE var1 AS date NOT NULL;\n    postgres=# SELECT var1;\n    ERROR:  null value is not allowed for NOT NULL schema variable \"var1\"\n    DETAIL:  The schema variable was not initialized yet.\n    postgres=# LET var1=current_date;\n    ERROR:  schema variable \"var1\" is declared IMMUTABLE\n\nIt would probably be better to not allow NOT NULL when IMMUTABLE is\nused, because the variable cannot be used at all.  Also, IMMUTABLE\nwithout a DEFAULT value should probably be restricted as it makes\nno sense. If the user wants the variable to be NULL he must use DEFAULT\nNULL. This is just a thought; the above error messages are explicit and\nthe user can understand what wrong declaration he has made.\n\nExcept for that, I think this patch is ready for committers, so if there is\nno other opinion in favor of restricting the use of IMMUTABLE with NOT\nNULL and DEFAULT I will change the status to ready for committers.\n\n-- \nGilles Darold\nhttp://www.darold.net/", "msg_date": "Sat, 28 Aug 2021 11:57:28 +0200", "msg_from": "Gilles Darold <gilles@darold.net>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "so 28. 8. 2021 v 11:57 odesílatel Gilles Darold <gilles@darold.net> napsal:\n\n> Hi,\n>\n> Review resume:\n>\n>\n> This patch implements Schema Variables that are database objects that can\n> hold a single or composite value following the data type used at variable\n> declaration. Schema variables, like relations, exist within a schema and\n> their access is controlled via GRANT and REVOKE commands. The schema\n> variable can be created by the CREATE VARIABLE command, altered using ALTER\n> VARIABLE and removed using DROP VARIABLE.\n>\n> The value of a schema variable is local to the current session. Retrieving\n> a variable's value returns either a NULL or a default value, unless its\n> value is set to something else in the current session with a LET command.\n> The content of a variable is not transactional. This is the same as in\n> regular variables in PL languages.\n>\n> Schema variables are retrieved by the SELECT SQL command. Their value is\n> set with the LET SQL command. While schema variables share properties with\n> tables, their value cannot be updated with an UPDATE command.\n>\n> The patch apply with the patch command without problem and compilation\n> reports no warning or errors. 
Regression tests pass successfully using make\n> check or make installcheck\n> It also includes all documentation and regression tests.\n>\n> Performances are near the set of plpgsql variable settings which is\n> impressive:\n>\n> do $$\n> declare var1 int ; i int;\n> begin\n> for i in 1..1000000\n> loop\n> var1 := i;\n> end loop;\n> end;\n> $$;\n> DO\n> Time: 71,515 ms\n>\n> CREATE VARIABLE var1 AS integer;\n> do $$\n> declare i int ;\n> begin\n> for i in 1..1000000\n> loop\n> let var1 = i;\n> end loop;\n> end;\n> $$;\n> DO\n> Time: 94,658 ms\n>\n> There is just one thing that puzzles me. We can use :\n>\n> CREATE VARIABLE var1 AS date NOT NULL;\n> postgres=# SELECT var1;\n> ERROR: null value is not allowed for NOT NULL schema variable \"var1\"\n>\n> which I understand and is the right behavior. But if we use:\n>\n> CREATE IMMUTABLE VARIABLE var1 AS date NOT NULL;\n> postgres=# SELECT var1;\n> ERROR: null value is not allowed for NOT NULL schema variable \"var1\"\n> DETAIL: The schema variable was not initialized yet.\n> postgres=# LET var1=current_date;\n> ERROR: schema variable \"var1\" is declared IMMUTABLE\n>\n> It should probably be better to not allow NOT NULL when IMMUTABLE is used\n> because the variable can not be used at all. Also probably IMMUTABLE\n> without a DEFAULT value should also be restricted as it makes no sens. If\n> the user wants the variable to be NULL he must use DEFAULT NULL. This is\n> just a though, the above error messages are explicit and the user can\n> understand what wrong declaration he have done.\n>\n\nI thought about this case, and I have one scenario, where this behaviour\ncan be useful. When the variable is declared as IMMUTABLE NOT NULL without\nnot null default, then any access to the content of the variable has to\nfail. I think it can be used for detection, where and when the variable is\nfirst used. So this behavior is allowed just because I think, so this\nfeature can be interesting for debugging. 
If this idea is too strange, I\nhave no problem to disable this case.\n\nRegards\n\nPavel\n\n\n>\n> Except that I think this patch is ready for committers, so if there is no\n> other opinion in favor of restricting the use of IMMUTABLE with NOT NULL\n> and DEFAULT I will change the status to ready for committers.\n>\n> --\n> Gilles Darold\n> http://www.darold.net/\n>\n>", "msg_date": "Sun, 29 Aug 2021 22:46:47 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nso 28. 8. 2021 v 11:57 odesílatel Gilles Darold <gilles@darold.net> napsal:\n\n> Hi,\n>\n> Review resume:\n>\n>\n> This patch implements Schema Variables that are database objects that can\n> hold a single or composite value following the data type used at variable\n> declaration. Schema variables, like relations, exist within a schema and\n> their access is controlled via GRANT and REVOKE commands. The schema\n> variable can be created by the CREATE VARIABLE command, altered using ALTER\n> VARIABLE and removed using DROP VARIABLE.\n>\n> The value of a schema variable is local to the current session. Retrieving\n> a variable's value returns either a NULL or a default value, unless its\n> value is set to something else in the current session with a LET command.\n> The content of a variable is not transactional. This is the same as in\n> regular variables in PL languages.\n>\n> Schema variables are retrieved by the SELECT SQL command. Their value is\n> set with the LET SQL command. While schema variables share properties with\n> tables, their value cannot be updated with an UPDATE command.\n>\n> The patch apply with the patch command without problem and compilation\n> reports no warning or errors. 
Regression tests pass successfully using make\n> check or make installcheck\n> It also includes all documentation and regression tests.\n>\n> Performances are near the set of plpgsql variable settings which is\n> impressive:\n>\n> do $$\n> declare var1 int ; i int;\n> begin\n> for i in 1..1000000\n> loop\n> var1 := i;\n> end loop;\n> end;\n> $$;\n> DO\n> Time: 71,515 ms\n>\n> CREATE VARIABLE var1 AS integer;\n> do $$\n> declare i int ;\n> begin\n> for i in 1..1000000\n> loop\n> let var1 = i;\n> end loop;\n> end;\n> $$;\n> DO\n> Time: 94,658 ms\n>\n> There is just one thing that puzzles me. We can use :\n>\n> CREATE VARIABLE var1 AS date NOT NULL;\n> postgres=# SELECT var1;\n> ERROR: null value is not allowed for NOT NULL schema variable \"var1\"\n>\n> which I understand and is the right behavior. But if we use:\n>\n> CREATE IMMUTABLE VARIABLE var1 AS date NOT NULL;\n> postgres=# SELECT var1;\n> ERROR: null value is not allowed for NOT NULL schema variable \"var1\"\n> DETAIL: The schema variable was not initialized yet.\n> postgres=# LET var1=current_date;\n> ERROR: schema variable \"var1\" is declared IMMUTABLE\n>\n> It should probably be better to not allow NOT NULL when IMMUTABLE is used\n> because the variable can not be used at all. Also probably IMMUTABLE\n> without a DEFAULT value should also be restricted as it makes no sens. If\n> the user wants the variable to be NULL he must use DEFAULT NULL. This is\n> just a though, the above error messages are explicit and the user can\n> understand what wrong declaration he have done.\n>\n\nI wrote a check that disables this case. Please, see the attached patch. 
I\nagree, so this case is confusing, and it is better to disable it.\n\nRegards\n\nPavel\n\n\n> Except that I think this patch is ready for committers, so if there is no\n> other opinion in favor of restricting the use of IMMUTABLE with NOT NULL\n> and DEFAULT I will change the status to ready for committers.\n>\n> --\n> Gilles Daroldhttp://www.darold.net/\n>\n>", "msg_date": "Wed, 8 Sep 2021 14:41:27 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Le 08/09/2021 à 13:41, Pavel Stehule a écrit :\n> Hi\n>\n> so 28. 8. 2021 v 11:57 odesílatel Gilles Darold <gilles@darold.net \n> <mailto:gilles@darold.net>> napsal:\n>\n> Hi,\n>\n> Review resume:\n>\n>\n> This patch implements Schema Variables that are database objects\n> that can hold a single or composite value following the data type\n> used at variable declaration. Schema variables, like relations,\n> exist within a schema and their access is controlled via GRANT and\n> REVOKE commands. The schema variable can be created by the CREATE\n> VARIABLE command, altered using ALTER VARIABLE and removed using\n> DROP VARIABLE.\n>\n> The value of a schema variable is local to the current session.\n> Retrieving a variable's value returns either a NULL or a default\n> value, unless its value is set to something else in the current\n> session with a LET command. The content of a variable is not\n> transactional. This is the same as in regular variables in PL\n> languages.\n>\n> Schema variables are retrieved by the SELECT SQL command. Their\n> value is set with the LET SQL command. While schema variables\n> share properties with tables, their value cannot be updated with\n> an UPDATE command.\n>\n>\n> The patch apply with the patch command without problem and\n> compilation reports no warning or errors. 
Regression tests pass\n> successfully using make check or make installcheck\n> It also includes all documentation and regression tests.\n>\n> Performances are near the set of plpgsql variable settings which\n> is impressive:\n>\n> do $$\n> declare var1 int ; i int;\n> begin\n>   for i in 1..1000000\n>   loop\n>     var1 := i;\n>   end loop;\n> end;\n> $$;\n> DO\n> Time: 71,515 ms\n>\n> CREATE VARIABLE var1 AS integer;\n> do $$\n> declare i int ;\n> begin\n>   for i in 1..1000000\n>   loop\n>     let var1 = i;\n>   end loop;\n> end;\n> $$;\n> DO\n> Time: 94,658 ms\n>\n> There is just one thing that puzzles me.We can use :\n>\n>     CREATE VARIABLE var1 AS date NOT NULL;\n>     postgres=# SELECT var1;\n>     ERROR:  null value is not allowed for NOT NULL schema variable\n> \"var1\"\n>\n> which I understand and is the right behavior. But if we use:\n>\n>     CREATE IMMUTABLE VARIABLE var1 AS date NOT NULL;\n>     postgres=# SELECT var1;\n>     ERROR:  null value is not allowed for NOT NULL schema variable\n> \"var1\"\n>     DETAIL:  The schema variable was not initialized yet.\n>     postgres=# LET var1=current_date;\n>     ERROR:  schema variable \"var1\" is declared IMMUTABLE\n>\n> It should probably be better to not allow NOT NULL when IMMUTABLE\n> is used because the variable can not be used at all.  Also\n> probably IMMUTABLE without a DEFAULT value should also be\n> restricted as it makes no sens. If the user wants the variable to\n> be NULL he must use DEFAULT NULL. This is just a though, the above\n> error messages are explicit and the user can understand what wrong\n> declaration he have done.\n>\n>\n> I wrote a check that disables this case.  Please, see the attached \n> patch. 
I agree, so this case is confusing, and it\n> is better to disable it.\n>\n\nGreat, I also think that this is better to not confuse the user.\n\n     postgres=# CREATE IMMUTABLE VARIABLE var1 AS date NOT NULL;\n     ERROR:  IMMUTABLE NOT NULL variable requires default expression\n\nWorking as expected. I have moved the patch to \"Ready for committers\". \nThanks for this feature.\n\n\n-- \nGilles Darold\nhttp://www.darold.net/", "msg_date": "Wed, 8 Sep 2021 17:59:16 +0100", "msg_from": "Gilles Darold <gilles@darold.net>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nGreat, I also think that this is better to not confuse the user.\n>\n> postgres=# CREATE IMMUTABLE VARIABLE var1 AS date NOT NULL;\n> ERROR: IMMUTABLE NOT NULL variable requires default expression\n>\n> Working as expected. I have moved the patch to \"Ready for committers\".\n> Thanks for this feature.\n>\n\nThank you very much\n\nPavel\n\n\n> --\n> Gilles Daroldhttp://www.darold.net/\n>\n>\n\nHiGreat, I also think that this is better to not confuse the user.\n\n\n    postgres=# CREATE IMMUTABLE VARIABLE var1 AS date NOT NULL;\n     ERROR:  IMMUTABLE NOT NULL variable requires default\n expression\n\n\nWorking as expected. I have moved the patch to \"Ready for\n committers\". 
Thanks for this feature.Thank you very muchPavel\n\n\n\n-- \nGilles Darold\nhttp://www.darold.net/", "msg_date": "Wed, 8 Sep 2021 21:23:41 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nfresh rebase\n\nRegards\n\nPavel", "msg_date": "Thu, 9 Sep 2021 06:59:31 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": " > [schema-variables-20210909.patch]\n\nHi Pavel,\n\nThe patch applies and compiles fine but 'make check' for the \nassert-enabled fails on 131 out of 210 tests.\n\n(while compiling HEAD checks run without errors for both assert-disabled \nand assert-enabled)\n\n\nErik Rijkers\n\n\ntest tablespace ... ok 303 ms\nparallel group (20 tests): oid char pg_lsn int2 varchar txid int4 \nregproc uuid float4 text name money boolean bit float8 int8 enum numeric \nrangetypes\n boolean ... ok 112 ms\n char ... ok 57 ms\n name ... ok 106 ms\n varchar ... ok 74 ms\n text ... ok 106 ms\n int2 ... ok 73 ms\n int4 ... ok 92 ms\n int8 ... ok 130 ms\n oid ... ok 55 ms\n float4 ... ok 102 ms\n float8 ... ok 126 ms\n bit ... ok 124 ms\n numeric ... ok 362 ms\n txid ... ok 87 ms\n uuid ... ok 100 ms\n enum ... ok 142 ms\n money ... ok 109 ms\n rangetypes ... ok 433 ms\n pg_lsn ... ok 64 ms\n regproc ... ok 91 ms\nparallel group (20 tests): lseg path circle time macaddr \ncreate_function_0 timetz line macaddr8 numerology point interval inet \ndate strings polygon box multirangetypes timestamp timestamptz\n strings ... ok 166 ms\n numerology ... ok 89 ms\n point ... ok 96 ms\n lseg ... ok 35 ms\n line ... ok 70 ms\n box ... ok 255 ms\n path ... ok 50 ms\n polygon ... ok 237 ms\n circle ... ok 53 ms\n date ... ok 127 ms\n time ... ok 60 ms\n timetz ... ok 67 ms\n timestamp ... ok 379 ms\n timestamptz ... 
ok 413 ms\n interval ... ok 97 ms\n inet ... ok 118 ms\n macaddr ... ok 60 ms\n macaddr8 ... ok 80 ms\n multirangetypes ... ok 307 ms\n create_function_0 ... ok 63 ms\nparallel group (12 tests): comments unicode misc_sanity tstypes xid \nexpressions horology geometry mvcc type_sanity regex opr_sanity\n geometry ... ok 140 ms\n horology ... ok 120 ms\n tstypes ... ok 53 ms\n regex ... ok 335 ms\n type_sanity ... ok 155 ms\n opr_sanity ... ok 355 ms\n misc_sanity ... ok 43 ms\n comments ... ok 20 ms\n expressions ... ok 100 ms\n unicode ... ok 25 ms\n xid ... ok 56 ms\n mvcc ... ok 146 ms\ntest create_function_1 ... ok 10 ms\ntest create_type ... ok 30 ms\ntest create_table ... ok 333 ms\ntest create_function_2 ... ok 11 ms\nparallel group (5 tests): copydml copyselect insert_conflict insert copy\n copy ... ok 336 ms\n copyselect ... ok 34 ms\n copydml ... ok 28 ms\n insert ... ok 291 ms\n insert_conflict ... FAILED (test process exited with \nexit code 2) 239 ms\nparallel group (3 tests): create_operator create_procedure create_misc\n create_misc ... ok 131 ms\n create_operator ... ok 29 ms\n create_procedure ... ok 52 ms\nparallel group (5 tests): create_view create_index_spgist \nindex_including create_index index_including_gist\n create_index ... FAILED (test process exited with \nexit code 2) 3801 ms\n create_index_spgist ... ok 523 ms\n create_view ... FAILED (test process exited with \nexit code 2) 339 ms\n index_including ... FAILED (test process exited with \nexit code 2) 3801 ms\n index_including_gist ... FAILED (test process exited with \nexit code 2) 3801 ms\nparallel group (16 tests): create_aggregate create_cast typed_table \ndrop_if_exists roleattributes create_am hash_func updatable_views errors \ninfinite_recurse create_function_3 triggers constraints select inherit \nvacuum\n create_aggregate ... FAILED (test process exited with \nexit code 2) 164 ms\n create_function_3 ... FAILED (test process exited with \nexit code 2) 164 ms\n create_cast ... 
FAILED (test process exited with \nexit code 2) 164 ms\n constraints ... FAILED (test process exited with \nexit code 2) 181 ms\n triggers ... FAILED (test process exited with \nexit code 2) 181 ms\n select ... FAILED (test process exited with \nexit code 2) 181 ms\n inherit ... FAILED (test process exited with \nexit code 2) 181 ms\n typed_table ... FAILED (test process exited with \nexit code 2) 163 ms\n vacuum ... FAILED (test process exited with \nexit code 2) 180 ms\n drop_if_exists ... FAILED (test process exited with \nexit code 2) 163 ms\n updatable_views ... FAILED (test process exited with \nexit code 2) 163 ms\n roleattributes ... FAILED (test process exited with \nexit code 2) 163 ms\n create_am ... FAILED (test process exited with \nexit code 2) 163 ms\n hash_func ... FAILED (test process exited with \nexit code 2) 162 ms\n errors ... FAILED (test process exited with \nexit code 2) 162 ms\n infinite_recurse ... FAILED (test process exited with \nexit code 2) 162 ms\ntest sanity_check ... FAILED (test process exited with \nexit code 2) 26 ms\nparallel group (20 tests): select_into subselect select_distinct arrays \njoin namespace hash_index select_having portals transactions aggregates \nrandom update delete union btree_index select_implicit \nselect_distinct_on prepared_xacts case\n select_into ... FAILED (test process exited with \nexit code 2) 20 ms\n select_distinct ... FAILED (test process exited with \nexit code 2) 21 ms\n select_distinct_on ... FAILED (test process exited with \nexit code 2) 26 ms\n select_implicit ... FAILED (test process exited with \nexit code 2) 26 ms\n select_having ... FAILED (test process exited with \nexit code 2) 23 ms\n subselect ... FAILED (test process exited with \nexit code 2) 20 ms\n union ... FAILED (test process exited with \nexit code 2) 25 ms\n case ... FAILED (test process exited with \nexit code 2) 27 ms\n join ... FAILED (test process exited with \nexit code 2) 22 ms\n aggregates ... 
FAILED (test process exited with \nexit code 2) 24 ms\n transactions ... FAILED (test process exited with \nexit code 2) 24 ms\n random ... failed (ignored) (test process \nexited with exit code 2) 24 ms\n portals ... FAILED (test process exited with \nexit code 2) 23 ms\n arrays ... FAILED (test process exited with \nexit code 2) 20 ms\n btree_index ... FAILED (test process exited with \nexit code 2) 25 ms\n hash_index ... FAILED (test process exited with \nexit code 2) 22 ms\n update ... FAILED (test process exited with \nexit code 2) 23 ms\n delete ... FAILED (test process exited with \nexit code 2) 24 ms\n namespace ... FAILED (test process exited with \nexit code 2) 21 ms\n prepared_xacts ... FAILED (test process exited with \nexit code 2) 25 ms\nparallel group (20 tests): gist brin identity generated password \ntablesample lock matview replica_identity rowsecurity security_label \nobject_address drop_operator groupingsets join_hash privileges collate \ninit_privs spgist gin\n brin ... FAILED (test process exited with \nexit code 2) 15 ms\n gin ... FAILED (test process exited with \nexit code 2) 22 ms\n gist ... FAILED (test process exited with \nexit code 2) 13 ms\n spgist ... FAILED (test process exited with \nexit code 2) 22 ms\n privileges ... FAILED (test process exited with \nexit code 2) 19 ms\n init_privs ... FAILED (test process exited with \nexit code 2) 21 ms\n security_label ... FAILED (test process exited with \nexit code 2) 17 ms\n collate ... FAILED (test process exited with \nexit code 2) 20 ms\n matview ... FAILED (test process exited with \nexit code 2) 17 ms\n lock ... FAILED (test process exited with \nexit code 2) 15 ms\n replica_identity ... FAILED (test process exited with \nexit code 2) 17 ms\n rowsecurity ... FAILED (test process exited with \nexit code 2) 17 ms\n object_address ... FAILED (test process exited with \nexit code 2) 17 ms\n tablesample ... FAILED (test process exited with \nexit code 2) 15 ms\n groupingsets ... 
FAILED (test process exited with \nexit code 2) 17 ms\n drop_operator ... FAILED (test process exited with \nexit code 2) 17 ms\n password ... FAILED (test process exited with \nexit code 2) 14 ms\n identity ... FAILED (test process exited with \nexit code 2) 13 ms\n generated ... FAILED (test process exited with \nexit code 2) 14 ms\n join_hash ... FAILED (test process exited with \nexit code 2) 18 ms\nparallel group (2 tests): brin_multi brin_bloom\n brin_bloom ... FAILED (test process exited with \nexit code 2) 4 ms\n brin_multi ... FAILED (test process exited with \nexit code 2) 4 ms\nparallel group (14 tests): async create_table_like collate.icu.utf8 \nmisc sysviews alter_operator tidscan tidrangescan alter_generic tsrf \nincremental_sort misc_functions tid dbsize\n create_table_like ... FAILED (test process exited with \nexit code 2) 10 ms\n alter_generic ... FAILED (test process exited with \nexit code 2) 15 ms\n alter_operator ... FAILED (test process exited with \nexit code 2) 13 ms\n misc ... FAILED (test process exited with \nexit code 2) 11 ms\n async ... FAILED (test process exited with \nexit code 2) 9 ms\n dbsize ... FAILED (test process exited with \nexit code 2) 17 ms\n misc_functions ... FAILED (test process exited with \nexit code 2) 14 ms\n sysviews ... FAILED (test process exited with \nexit code 2) 12 ms\n tsrf ... FAILED (test process exited with \nexit code 2) 14 ms\n tid ... FAILED (test process exited with \nexit code 2) 16 ms\n tidscan ... FAILED (test process exited with \nexit code 2) 13 ms\n tidrangescan ... FAILED (test process exited with \nexit code 2) 13 ms\n collate.icu.utf8 ... FAILED (test process exited with \nexit code 2) 9 ms\n incremental_sort ... FAILED (test process exited with \nexit code 2) 13 ms\nparallel group (6 tests): amutils psql_crosstab collate.linux.utf8 psql \nstats_ext rules\n rules ... FAILED (test process exited with \nexit code 2) 8 ms\n psql ... 
FAILED (test process exited with \nexit code 2) 7 ms\n psql_crosstab ... FAILED (test process exited with \nexit code 2) 7 ms\n amutils ... FAILED (test process exited with \nexit code 2) 7 ms\n stats_ext ... FAILED (test process exited with \nexit code 2) 8 ms\n collate.linux.utf8 ... FAILED (test process exited with \nexit code 2) 7 ms\ntest select_parallel ... FAILED (test process exited with \nexit code 2) 3 ms\ntest write_parallel ... FAILED (test process exited with \nexit code 2) 3 ms\nparallel group (2 tests): publication subscription\n publication ... FAILED (test process exited with \nexit code 2) 4 ms\n subscription ... FAILED (test process exited with \nexit code 2) 4 ms\nparallel group (17 tests): select_views foreign_key xmlmap window \nfunctional_deps tsearch cluster combocid bitmapops tsdicts equivclass \nguc indirect_toast advisory_lock foreign_data dependency portals_p2\n select_views ... FAILED (test process exited with \nexit code 2) 11 ms\n portals_p2 ... FAILED (test process exited with \nexit code 2) 20 ms\n foreign_key ... FAILED (test process exited with \nexit code 2) 11 ms\n cluster ... FAILED (test process exited with \nexit code 2) 15 ms\n dependency ... FAILED (test process exited with \nexit code 2) 19 ms\n guc ... FAILED (test process exited with \nexit code 2) 16 ms\n bitmapops ... FAILED (test process exited with \nexit code 2) 16 ms\n combocid ... FAILED (test process exited with \nexit code 2) 15 ms\n tsearch ... FAILED (test process exited with \nexit code 2) 14 ms\n tsdicts ... FAILED (test process exited with \nexit code 2) 16 ms\n foreign_data ... FAILED (test process exited with \nexit code 2) 17 ms\n window ... FAILED (test process exited with \nexit code 2) 12 ms\n xmlmap ... FAILED (test process exited with \nexit code 2) 12 ms\n functional_deps ... FAILED (test process exited with \nexit code 2) 13 ms\n advisory_lock ... FAILED (test process exited with \nexit code 2) 16 ms\n indirect_toast ... 
FAILED (test process exited with \nexit code 2) 16 ms\n equivclass ... FAILED (test process exited with \nexit code 2) 15 ms\nparallel group (6 tests): json json_encoding jsonb jsonpath_encoding \njsonb_jsonpath jsonpath\n json ... FAILED (test process exited with \nexit code 2) 4 ms\n jsonb ... FAILED (test process exited with \nexit code 2) 6 ms\n json_encoding ... FAILED (test process exited with \nexit code 2) 5 ms\n jsonpath ... FAILED (test process exited with \nexit code 2) 10 ms\n jsonpath_encoding ... FAILED (test process exited with \nexit code 2) 6 ms\n jsonb_jsonpath ... FAILED (test process exited with \nexit code 2) 7 ms\nparallel group (19 tests): plpgsql limit rowtypes sequence largeobject \nreturning domain polymorphism plancache prepare alter_table truncate \ntemp rangefuncs with copy2 conversion schema_variables xml\n plancache ... FAILED (test process exited with \nexit code 2) 16 ms\n limit ... FAILED (test process exited with \nexit code 2) 10 ms\n plpgsql ... FAILED (test process exited with \nexit code 2) 7 ms\n copy2 ... FAILED (test process exited with \nexit code 2) 25 ms\n temp ... FAILED (test process exited with \nexit code 2) 21 ms\n domain ... FAILED (test process exited with \nexit code 2) 13 ms\n rangefuncs ... FAILED (test process exited with \nexit code 2) 22 ms\n prepare ... FAILED (test process exited with \nexit code 2) 19 ms\n conversion ... FAILED (test process exited with \nexit code 2) 24 ms\n truncate ... FAILED (test process exited with \nexit code 2) 19 ms\n alter_table ... FAILED (test process exited with \nexit code 2) 18 ms\n sequence ... FAILED (test process exited with \nexit code 2) 11 ms\n polymorphism ... FAILED (test process exited with \nexit code 2) 13 ms\n rowtypes ... FAILED (test process exited with \nexit code 2) 10 ms\n returning ... FAILED (test process exited with \nexit code 2) 11 ms\n largeobject ... FAILED (test process exited with \nexit code 2) 10 ms\n with ... 
FAILED (test process exited with \nexit code 2) 22 ms\n xml ... FAILED (test process exited with \nexit code 2) 23 ms\n schema_variables ... FAILED (test process exited with \nexit code 2) 23 ms\nparallel group (11 tests): explain hash_part partition_info reloptions \nmemoize compression partition_aggregate partition_join indexing \npartition_prune tuplesort\n partition_join ... FAILED 902 ms\n partition_prune ... ok 1006 ms\n reloptions ... ok 106 ms\n hash_part ... ok 99 ms\n indexing ... ok 929 ms\n partition_aggregate ... ok 791 ms\n partition_info ... ok 104 ms\n tuplesort ... ok 1099 ms\n explain ... ok 90 ms\n compression ... ok 214 ms\n memoize ... ok 109 ms\nparallel group (2 tests): event_trigger oidjoins\n event_trigger ... ok 107 ms\n oidjoins ... ok 157 ms\ntest fast_default ... ok 138 ms\ntest stats ... ok 617 ms\n\n\n\nOn 9/9/21 6:59 AM, Pavel Stehule wrote:\n> Hi\n> \n> fresh rebase\n> \n> Regards\n> \n> Pavel\n> \n\n\n", "msg_date": "Thu, 9 Sep 2021 12:21:19 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nOn Thu, Sep 9, 2021 at 12:21 Erik Rijkers <er@xs4all.nl> wrote:\n\n> > [schema-variables-20210909.patch]\n>\n> Hi Pavel,\n>\n> The patch applies and compiles fine but 'make check' for the\n> assert-enabled fails on 131 out of 210 tests.\n>\n\nThis morning I tested it. I'll recheck it.\n\nPavel\n\n\n> (while compiling HEAD checks run without errors for both assert-disabled\n> and assert-enabled)\n>\n>\n> Erik Rijkers\n>\n>\n> test tablespace ... ok 303 ms\n> parallel group (20 tests): oid char pg_lsn int2 varchar txid int4\n> regproc uuid float4 text name money boolean bit float8 int8 enum numeric\n> rangetypes\n> boolean ... ok 112 ms\n> char ... ok 57 ms\n> name ... ok 106 ms\n> varchar ... ok 74 ms\n> text ... ok 106 ms\n> int2 ... ok 73 ms\n> int4 ... ok 92 ms\n> int8 ... ok 130 ms\n> oid ... 
ok 55 ms\n> float4 ... ok 102 ms\n> float8 ... ok 126 ms\n> bit ... ok 124 ms\n> numeric ... ok 362 ms\n> txid ... ok 87 ms\n> uuid ... ok 100 ms\n> enum ... ok 142 ms\n> money ... ok 109 ms\n> rangetypes ... ok 433 ms\n> pg_lsn ... ok 64 ms\n> regproc ... ok 91 ms\n> parallel group (20 tests): lseg path circle time macaddr\n> create_function_0 timetz line macaddr8 numerology point interval inet\n> date strings polygon box multirangetypes timestamp timestamptz\n> strings ... ok 166 ms\n> numerology ... ok 89 ms\n> point ... ok 96 ms\n> lseg ... ok 35 ms\n> line ... ok 70 ms\n> box ... ok 255 ms\n> path ... ok 50 ms\n> polygon ... ok 237 ms\n> circle ... ok 53 ms\n> date ... ok 127 ms\n> time ... ok 60 ms\n> timetz ... ok 67 ms\n> timestamp ... ok 379 ms\n> timestamptz ... ok 413 ms\n> interval ... ok 97 ms\n> inet ... ok 118 ms\n> macaddr ... ok 60 ms\n> macaddr8 ... ok 80 ms\n> multirangetypes ... ok 307 ms\n> create_function_0 ... ok 63 ms\n> parallel group (12 tests): comments unicode misc_sanity tstypes xid\n> expressions horology geometry mvcc type_sanity regex opr_sanity\n> geometry ... ok 140 ms\n> horology ... ok 120 ms\n> tstypes ... ok 53 ms\n> regex ... ok 335 ms\n> type_sanity ... ok 155 ms\n> opr_sanity ... ok 355 ms\n> misc_sanity ... ok 43 ms\n> comments ... ok 20 ms\n> expressions ... ok 100 ms\n> unicode ... ok 25 ms\n> xid ... ok 56 ms\n> mvcc ... ok 146 ms\n> test create_function_1 ... ok 10 ms\n> test create_type ... ok 30 ms\n> test create_table ... ok 333 ms\n> test create_function_2 ... ok 11 ms\n> parallel group (5 tests): copydml copyselect insert_conflict insert copy\n> copy ... ok 336 ms\n> copyselect ... ok 34 ms\n> copydml ... ok 28 ms\n> insert ... ok 291 ms\n> insert_conflict ... FAILED (test process exited with\n> exit code 2) 239 ms\n> parallel group (3 tests): create_operator create_procedure create_misc\n> create_misc ... ok 131 ms\n> create_operator ... ok 29 ms\n> create_procedure ... 
ok 52 ms\n> parallel group (5 tests): create_view create_index_spgist\n> index_including create_index index_including_gist\n> create_index ... FAILED (test process exited with\n> exit code 2) 3801 ms\n> create_index_spgist ... ok 523 ms\n> create_view ... FAILED (test process exited with\n> exit code 2) 339 ms\n> index_including ... FAILED (test process exited with\n> exit code 2) 3801 ms\n> index_including_gist ... FAILED (test process exited with\n> exit code 2) 3801 ms\n> parallel group (16 tests): create_aggregate create_cast typed_table\n> drop_if_exists roleattributes create_am hash_func updatable_views errors\n> infinite_recurse create_function_3 triggers constraints select inherit\n> vacuum\n> create_aggregate ... FAILED (test process exited with\n> exit code 2) 164 ms\n> create_function_3 ... FAILED (test process exited with\n> exit code 2) 164 ms\n> create_cast ... FAILED (test process exited with\n> exit code 2) 164 ms\n> constraints ... FAILED (test process exited with\n> exit code 2) 181 ms\n> triggers ... FAILED (test process exited with\n> exit code 2) 181 ms\n> select ... FAILED (test process exited with\n> exit code 2) 181 ms\n> inherit ... FAILED (test process exited with\n> exit code 2) 181 ms\n> typed_table ... FAILED (test process exited with\n> exit code 2) 163 ms\n> vacuum ... FAILED (test process exited with\n> exit code 2) 180 ms\n> drop_if_exists ... FAILED (test process exited with\n> exit code 2) 163 ms\n> updatable_views ... FAILED (test process exited with\n> exit code 2) 163 ms\n> roleattributes ... FAILED (test process exited with\n> exit code 2) 163 ms\n> create_am ... FAILED (test process exited with\n> exit code 2) 163 ms\n> hash_func ... FAILED (test process exited with\n> exit code 2) 162 ms\n> errors ... FAILED (test process exited with\n> exit code 2) 162 ms\n> infinite_recurse ... FAILED (test process exited with\n> exit code 2) 162 ms\n> test sanity_check ... 
FAILED (test process exited with\n> exit code 2) 26 ms\n> parallel group (20 tests): select_into subselect select_distinct arrays\n> join namespace hash_index select_having portals transactions aggregates\n> random update delete union btree_index select_implicit\n> select_distinct_on prepared_xacts case\n> select_into ... FAILED (test process exited with\n> exit code 2) 20 ms\n> select_distinct ... FAILED (test process exited with\n> exit code 2) 21 ms\n> select_distinct_on ... FAILED (test process exited with\n> exit code 2) 26 ms\n> select_implicit ... FAILED (test process exited with\n> exit code 2) 26 ms\n> select_having ... FAILED (test process exited with\n> exit code 2) 23 ms\n> subselect ... FAILED (test process exited with\n> exit code 2) 20 ms\n> union ... FAILED (test process exited with\n> exit code 2) 25 ms\n> case ... FAILED (test process exited with\n> exit code 2) 27 ms\n> join ... FAILED (test process exited with\n> exit code 2) 22 ms\n> aggregates ... FAILED (test process exited with\n> exit code 2) 24 ms\n> transactions ... FAILED (test process exited with\n> exit code 2) 24 ms\n> random ... failed (ignored) (test process\n> exited with exit code 2) 24 ms\n> portals ... FAILED (test process exited with\n> exit code 2) 23 ms\n> arrays ... FAILED (test process exited with\n> exit code 2) 20 ms\n> btree_index ... FAILED (test process exited with\n> exit code 2) 25 ms\n> hash_index ... FAILED (test process exited with\n> exit code 2) 22 ms\n> update ... FAILED (test process exited with\n> exit code 2) 23 ms\n> delete ... FAILED (test process exited with\n> exit code 2) 24 ms\n> namespace ... FAILED (test process exited with\n> exit code 2) 21 ms\n> prepared_xacts ... 
FAILED (test process exited with\n> exit code 2) 25 ms\n> parallel group (20 tests): gist brin identity generated password\n> tablesample lock matview replica_identity rowsecurity security_label\n> object_address drop_operator groupingsets join_hash privileges collate\n> init_privs spgist gin\n> brin ... FAILED (test process exited with\n> exit code 2) 15 ms\n> gin ... FAILED (test process exited with\n> exit code 2) 22 ms\n> gist ... FAILED (test process exited with\n> exit code 2) 13 ms\n> spgist ... FAILED (test process exited with\n> exit code 2) 22 ms\n> privileges ... FAILED (test process exited with\n> exit code 2) 19 ms\n> init_privs ... FAILED (test process exited with\n> exit code 2) 21 ms\n> security_label ... FAILED (test process exited with\n> exit code 2) 17 ms\n> collate ... FAILED (test process exited with\n> exit code 2) 20 ms\n> matview ... FAILED (test process exited with\n> exit code 2) 17 ms\n> lock ... FAILED (test process exited with\n> exit code 2) 15 ms\n> replica_identity ... FAILED (test process exited with\n> exit code 2) 17 ms\n> rowsecurity ... FAILED (test process exited with\n> exit code 2) 17 ms\n> object_address ... FAILED (test process exited with\n> exit code 2) 17 ms\n> tablesample ... FAILED (test process exited with\n> exit code 2) 15 ms\n> groupingsets ... FAILED (test process exited with\n> exit code 2) 17 ms\n> drop_operator ... FAILED (test process exited with\n> exit code 2) 17 ms\n> password ... FAILED (test process exited with\n> exit code 2) 14 ms\n> identity ... FAILED (test process exited with\n> exit code 2) 13 ms\n> generated ... FAILED (test process exited with\n> exit code 2) 14 ms\n> join_hash ... FAILED (test process exited with\n> exit code 2) 18 ms\n> parallel group (2 tests): brin_multi brin_bloom\n> brin_bloom ... FAILED (test process exited with\n> exit code 2) 4 ms\n> brin_multi ... 
FAILED (test process exited with\n> exit code 2) 4 ms\n> parallel group (14 tests): async create_table_like collate.icu.utf8\n> misc sysviews alter_operator tidscan tidrangescan alter_generic tsrf\n> incremental_sort misc_functions tid dbsize\n> create_table_like ... FAILED (test process exited with\n> exit code 2) 10 ms\n> alter_generic ... FAILED (test process exited with\n> exit code 2) 15 ms\n> alter_operator ... FAILED (test process exited with\n> exit code 2) 13 ms\n> misc ... FAILED (test process exited with\n> exit code 2) 11 ms\n> async ... FAILED (test process exited with\n> exit code 2) 9 ms\n> dbsize ... FAILED (test process exited with\n> exit code 2) 17 ms\n> misc_functions ... FAILED (test process exited with\n> exit code 2) 14 ms\n> sysviews ... FAILED (test process exited with\n> exit code 2) 12 ms\n> tsrf ... FAILED (test process exited with\n> exit code 2) 14 ms\n> tid ... FAILED (test process exited with\n> exit code 2) 16 ms\n> tidscan ... FAILED (test process exited with\n> exit code 2) 13 ms\n> tidrangescan ... FAILED (test process exited with\n> exit code 2) 13 ms\n> collate.icu.utf8 ... FAILED (test process exited with\n> exit code 2) 9 ms\n> incremental_sort ... FAILED (test process exited with\n> exit code 2) 13 ms\n> parallel group (6 tests): amutils psql_crosstab collate.linux.utf8 psql\n> stats_ext rules\n> rules ... FAILED (test process exited with\n> exit code 2) 8 ms\n> psql ... FAILED (test process exited with\n> exit code 2) 7 ms\n> psql_crosstab ... FAILED (test process exited with\n> exit code 2) 7 ms\n> amutils ... FAILED (test process exited with\n> exit code 2) 7 ms\n> stats_ext ... FAILED (test process exited with\n> exit code 2) 8 ms\n> collate.linux.utf8 ... FAILED (test process exited with\n> exit code 2) 7 ms\n> test select_parallel ... FAILED (test process exited with\n> exit code 2) 3 ms\n> test write_parallel ... 
FAILED (test process exited with\n> exit code 2) 3 ms\n> parallel group (2 tests): publication subscription\n> publication ... FAILED (test process exited with\n> exit code 2) 4 ms\n> subscription ... FAILED (test process exited with\n> exit code 2) 4 ms\n> parallel group (17 tests): select_views foreign_key xmlmap window\n> functional_deps tsearch cluster combocid bitmapops tsdicts equivclass\n> guc indirect_toast advisory_lock foreign_data dependency portals_p2\n> select_views ... FAILED (test process exited with\n> exit code 2) 11 ms\n> portals_p2 ... FAILED (test process exited with\n> exit code 2) 20 ms\n> foreign_key ... FAILED (test process exited with\n> exit code 2) 11 ms\n> cluster ... FAILED (test process exited with\n> exit code 2) 15 ms\n> dependency ... FAILED (test process exited with\n> exit code 2) 19 ms\n> guc ... FAILED (test process exited with\n> exit code 2) 16 ms\n> bitmapops ... FAILED (test process exited with\n> exit code 2) 16 ms\n> combocid ... FAILED (test process exited with\n> exit code 2) 15 ms\n> tsearch ... FAILED (test process exited with\n> exit code 2) 14 ms\n> tsdicts ... FAILED (test process exited with\n> exit code 2) 16 ms\n> foreign_data ... FAILED (test process exited with\n> exit code 2) 17 ms\n> window ... FAILED (test process exited with\n> exit code 2) 12 ms\n> xmlmap ... FAILED (test process exited with\n> exit code 2) 12 ms\n> functional_deps ... FAILED (test process exited with\n> exit code 2) 13 ms\n> advisory_lock ... FAILED (test process exited with\n> exit code 2) 16 ms\n> indirect_toast ... FAILED (test process exited with\n> exit code 2) 16 ms\n> equivclass ... FAILED (test process exited with\n> exit code 2) 15 ms\n> parallel group (6 tests): json json_encoding jsonb jsonpath_encoding\n> jsonb_jsonpath jsonpath\n> json ... FAILED (test process exited with\n> exit code 2) 4 ms\n> jsonb ... FAILED (test process exited with\n> exit code 2) 6 ms\n> json_encoding ... 
FAILED (test process exited with\n> exit code 2) 5 ms\n> jsonpath ... FAILED (test process exited with\n> exit code 2) 10 ms\n> jsonpath_encoding ... FAILED (test process exited with\n> exit code 2) 6 ms\n> jsonb_jsonpath ... FAILED (test process exited with\n> exit code 2) 7 ms\n> parallel group (19 tests): plpgsql limit rowtypes sequence largeobject\n> returning domain polymorphism plancache prepare alter_table truncate\n> temp rangefuncs with copy2 conversion schema_variables xml\n> plancache ... FAILED (test process exited with\n> exit code 2) 16 ms\n> limit ... FAILED (test process exited with\n> exit code 2) 10 ms\n> plpgsql ... FAILED (test process exited with\n> exit code 2) 7 ms\n> copy2 ... FAILED (test process exited with\n> exit code 2) 25 ms\n> temp ... FAILED (test process exited with\n> exit code 2) 21 ms\n> domain ... FAILED (test process exited with\n> exit code 2) 13 ms\n> rangefuncs ... FAILED (test process exited with\n> exit code 2) 22 ms\n> prepare ... FAILED (test process exited with\n> exit code 2) 19 ms\n> conversion ... FAILED (test process exited with\n> exit code 2) 24 ms\n> truncate ... FAILED (test process exited with\n> exit code 2) 19 ms\n> alter_table ... FAILED (test process exited with\n> exit code 2) 18 ms\n> sequence ... FAILED (test process exited with\n> exit code 2) 11 ms\n> polymorphism ... FAILED (test process exited with\n> exit code 2) 13 ms\n> rowtypes ... FAILED (test process exited with\n> exit code 2) 10 ms\n> returning ... FAILED (test process exited with\n> exit code 2) 11 ms\n> largeobject ... FAILED (test process exited with\n> exit code 2) 10 ms\n> with ... FAILED (test process exited with\n> exit code 2) 22 ms\n> xml ... FAILED (test process exited with\n> exit code 2) 23 ms\n> schema_variables ... 
FAILED (test process exited with\n> exit code 2) 23 ms\n> parallel group (11 tests): explain hash_part partition_info reloptions\n> memoize compression partition_aggregate partition_join indexing\n> partition_prune tuplesort\n> partition_join ... FAILED 902 ms\n> partition_prune ... ok 1006 ms\n> reloptions ... ok 106 ms\n> hash_part ... ok 99 ms\n> indexing ... ok 929 ms\n> partition_aggregate ... ok 791 ms\n> partition_info ... ok 104 ms\n> tuplesort ... ok 1099 ms\n> explain ... ok 90 ms\n> compression ... ok 214 ms\n> memoize ... ok 109 ms\n> parallel group (2 tests): event_trigger oidjoins\n> event_trigger ... ok 107 ms\n> oidjoins ... ok 157 ms\n> test fast_default ... ok 138 ms\n> test stats ... ok 617 ms\n>\n>\n>\n> On 9/9/21 6:59 AM, Pavel Stehule wrote:\n> > Hi\n> >\n> > fresh rebase\n> >\n> > Regards\n> >\n> > Pavel\n> >\n>\n
FAILED (test process exited with \r\nexit code 2)       16 ms\r\n      limit                        ... FAILED (test process exited with \r\nexit code 2)       10 ms\r\n      plpgsql                      ... FAILED (test process exited with \r\nexit code 2)        7 ms\r\n      copy2                        ... FAILED (test process exited with \r\nexit code 2)       25 ms\r\n      temp                         ... FAILED (test process exited with \r\nexit code 2)       21 ms\r\n      domain                       ... FAILED (test process exited with \r\nexit code 2)       13 ms\r\n      rangefuncs                   ... FAILED (test process exited with \r\nexit code 2)       22 ms\r\n      prepare                      ... FAILED (test process exited with \r\nexit code 2)       19 ms\r\n      conversion                   ... FAILED (test process exited with \r\nexit code 2)       24 ms\r\n      truncate                     ... FAILED (test process exited with \r\nexit code 2)       19 ms\r\n      alter_table                  ... FAILED (test process exited with \r\nexit code 2)       18 ms\r\n      sequence                     ... FAILED (test process exited with \r\nexit code 2)       11 ms\r\n      polymorphism                 ... FAILED (test process exited with \r\nexit code 2)       13 ms\r\n      rowtypes                     ... FAILED (test process exited with \r\nexit code 2)       10 ms\r\n      returning                    ... FAILED (test process exited with \r\nexit code 2)       11 ms\r\n      largeobject                  ... FAILED (test process exited with \r\nexit code 2)       10 ms\r\n      with                         ... FAILED (test process exited with \r\nexit code 2)       22 ms\r\n      xml                          ... FAILED (test process exited with \r\nexit code 2)       23 ms\r\n      schema_variables             ... 
FAILED (test process exited with \r\nexit code 2)       23 ms\r\nparallel group (11 tests):  explain hash_part partition_info reloptions \r\nmemoize compression partition_aggregate partition_join indexing \r\npartition_prune tuplesort\r\n      partition_join               ... FAILED      902 ms\r\n      partition_prune              ... ok         1006 ms\r\n      reloptions                   ... ok          106 ms\r\n      hash_part                    ... ok           99 ms\r\n      indexing                     ... ok          929 ms\r\n      partition_aggregate          ... ok          791 ms\r\n      partition_info               ... ok          104 ms\r\n      tuplesort                    ... ok         1099 ms\r\n      explain                      ... ok           90 ms\r\n      compression                  ... ok          214 ms\r\n      memoize                      ... ok          109 ms\r\nparallel group (2 tests):  event_trigger oidjoins\r\n      event_trigger                ... ok          107 ms\r\n      oidjoins                     ... ok          157 ms\r\ntest fast_default                 ... ok          138 ms\r\ntest stats                        ... ok          617 ms\n\n\n\r\nOn 9/9/21 6:59 AM, Pavel Stehule wrote:\r\n> Hi\r\n> \r\n> fresh rebase\r\n> \r\n> Regards\r\n> \r\n> Pavel\r\n>", "msg_date": "Thu, 9 Sep 2021 12:40:23 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Le 09/09/2021 à 11:40, Pavel Stehule a écrit :\n> Hi\n>\n> čt 9. 9. 2021 v 12:21 odesílatel Erik Rijkers <er@xs4all.nl \n> <mailto:er@xs4all.nl>> napsal:\n>\n>  > [schema-variables-20210909.patch]\n>\n> Hi Pavel,\n>\n> The patch applies and compiles fine but 'make check' for the\n> assert-enabled fails on 131 out of 210 tests.\n>\n>\n> This morning I tested it. 
I'll recheck it.\n>\n> Pavel\n>\n\nI had not this problem yesterday.\n\n\n-- \nGilles Darold\nhttp://www.darold.net/", "msg_date": "Thu, 9 Sep 2021 12:17:31 +0100", "msg_from": "Gilles Darold <gilles@darold.net>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "čt 9. 9. 2021 v 13:17 odesílatel Gilles Darold <gilles@darold.net> napsal:\n\nLe 09/09/2021 à 11:40, Pavel Stehule a\n écrit :\n\n\n\nHi\n\n\n\nčt 9. 9. 2021 v 12:21\n odesílatel Erik Rijkers <er@xs4all.nl> napsal:\n\n >\n [schema-variables-20210909.patch]\n\n Hi Pavel,\n\n The patch applies and compiles fine but 'make check' for the\n \n assert-enabled fails on 131 out of 210 tests.\n\n\n\nThis morning I tested it. 
I'll recheck it.\n\n\nPavel\n\n\n\n\n\n\n\nI had not this problem yesterday.I am able to reproduce it. Looks like some current changes of Nodes don't work with this patch. I have to investigate it.RegardsPavel\n\n\n-- \nGilles Darold\nhttp://www.darold.net/", "msg_date": "Thu, 9 Sep 2021 18:11:41 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nčt 9. 9. 2021 v 12:21 odesílatel Erik Rijkers <er@xs4all.nl> napsal:\n\n> > [schema-variables-20210909.patch]\n>\n> Hi Pavel,\n>\n> The patch applies and compiles fine but 'make check' for the\n> assert-enabled fails on 131 out of 210 tests.\n>\n> (while compiling HEAD checks run without errors for both assert-disabled\n> and assert-enabled)\n>\n>\n\nPlease, check, attached patch. I fixed a routine for processing a list of\nidentifiers - now it works with the identifier's node more sensitive.\nPrevious implementation of strVal was more tolerant.\n\nRegards\n\nPavel\n\n\n>", "msg_date": "Fri, 10 Sep 2021 10:06:04 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On 9/10/21 10:06 AM, Pavel Stehule wrote:\n> Hi\n> \n> čt 9. 9. 2021 v 12:21 odesílatel Erik Rijkers <er@xs4all.nl> napsal:\n> \n>>\n>> Hi Pavel,\n>>\n>> The patch applies and compiles fine but 'make check' for the\n>> assert-enabled fails on 131 out of 210 tests.\n>>\n>> (while compiling HEAD checks run without errors for both assert-disabled\n>> and assert-enabled)\n>>\n>>\n> \n> Please, check, attached patch. 
I fixed a routine for processing a list of\n> identifiers - now it works with the identifier's node more sensitive.\n> Previous implementation of strVal was more tolerant.\n\n > [schema-variables-20210910.patch]\n\nApply, compile, make, & check(-world), and my small testsuite OK.\n\nSo all's well again - Ready for committer!\n\nThanks,\n\nErik Rijkers\n\n\n> Regards\n> \n> Pavel\n> \n> \n>>\n> \n\n\n", "msg_date": "Fri, 10 Sep 2021 10:32:24 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "pá 10. 9. 2021 v 10:32 odesílatel Erik Rijkers <er@xs4all.nl> napsal:\n\n> On 9/10/21 10:06 AM, Pavel Stehule wrote:\n> > Hi\n> >\n> > čt 9. 9. 2021 v 12:21 odesílatel Erik Rijkers <er@xs4all.nl> napsal:\n> >\n> >>\n> >> Hi Pavel,\n> >>\n> >> The patch applies and compiles fine but 'make check' for the\n> >> assert-enabled fails on 131 out of 210 tests.\n> >>\n> >> (while compiling HEAD checks run without errors for both assert-disabled\n> >> and assert-enabled)\n> >>\n> >>\n> >\n> > Please, check, attached patch. I fixed a routine for processing a list of\n> > identifiers - now it works with the identifier's node more sensitive.\n> > Previous implementation of strVal was more tolerant.\n>\n> > [schema-variables-20210910.patch]\n>\n> Apply, compile, make, & check(-world), and my small testsuite OK.\n>\n> So all's well again - Ready for committer!\n>\n\nThank you for check and for report\n\nRegards\n\nPavel\n\n\n> Thanks,\n>\n> Erik Rijkers\n>\n>\n> > Regards\n> >\n> > Pavel\n> >\n> >\n> >>\n> >\n>\n\npá 10. 9. 2021 v 10:32 odesílatel Erik Rijkers <er@xs4all.nl> napsal:On 9/10/21 10:06 AM, Pavel Stehule wrote:\n> Hi\n> \n> čt 9. 9. 
2021 v 12:21 odesílatel Erik Rijkers <er@xs4all.nl> napsal:\n> \n>>\n>> Hi Pavel,\n>>\n>> The patch applies and compiles fine but 'make check' for the\n>> assert-enabled fails on 131 out of 210 tests.\n>>\n>> (while compiling HEAD checks run without errors for both assert-disabled\n>> and assert-enabled)\n>>\n>>\n> \n> Please, check, attached patch. I fixed a routine for processing a list of\n> identifiers - now it works with the identifier's node more sensitive.\n> Previous implementation of strVal was more tolerant.\n\n > [schema-variables-20210910.patch]\n\nApply, compile, make, & check(-world), and my small testsuite OK.\n\nSo all's well again - Ready for committer!Thank you for check and for reportRegardsPavel\n\nThanks,\n\nErik Rijkers\n\n\n> Regards\n> \n> Pavel\n> \n> \n>>\n>", "msg_date": "Fri, 10 Sep 2021 10:51:57 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On Fri, Sep 10, 2021 at 10:06:04AM +0200, Pavel Stehule wrote:\n> Hi\n> \n> čt 9. 9. 2021 v 12:21 odesílatel Erik Rijkers <er@xs4all.nl> napsal:\n> \n> > > [schema-variables-20210909.patch]\n> >\n> > Hi Pavel,\n> >\n> > The patch applies and compiles fine but 'make check' for the\n> > assert-enabled fails on 131 out of 210 tests.\n> >\n> > (while compiling HEAD checks run without errors for both assert-disabled\n> > and assert-enabled)\n> >\n> >\n> \n> Please, check, attached patch. 
I fixed a routine for processing a list of\n> identifiers - now it works with the identifier's node more sensitive.\n> Previous implementation of strVal was more tolerant.\n> \n\nHi Pavel,\n\nJust noted that there is no support for REASSIGN OWNED BY:\n\n\"\"\"\nregression=# create variable random_number numeric;\nCREATE VARIABLE\nregression=# alter variable random_number owner to jcm;\nALTER VARIABLE\nregression=# reassign owned by jcm to jaime;\nERROR: unexpected classid 9222\n\"\"\"\n\n\nTEMP variables are not schema variables? at least not attached to the\nschema one expects:\n\n\"\"\"\nregression=# create temp variable random_number numeric ;\nCREATE VARIABLE\nregression=# \\dV\n List of variables\n Schema | Name | Type | Is nullable | Is mutable | Default | Owner | Transaction\nal end action\n-----------+---------------+---------+-------------+------------+---------+----------+------------\n--------------\n pg_temp_4 | random_number | numeric | t | t | | jcasanov |\n(1 row)\n\nregression=# select public.random_number;\nERROR: missing FROM-clause entry for table \"public\"\nLINE 1: select public.random_number;\n ^\n\"\"\"\n\nThere was a comment that TEMP variables should be DECLAREd instead of\nCREATEd, i guess that is because those have similar behaviour. At least,\nI would like to see similar messages when using the ON COMMIT DROP\noption in a TEMP variable:\n\n\"\"\"\nregression=# create temp variable random_number numeric on commit drop;\nCREATE VARIABLE\nregression=# \\dV\nDid not find any schema variables.\nregression=# declare q cursor for select 1;\nERROR: DECLARE CURSOR can only be used in transaction blocks\n\"\"\"\n\nAbout that, why are you not using syntax ON COMMIT RESET instead on\ninventing ON TRANSACTION END RESET? seems better because you already use\nON COMMIT DROP.\n\nI will test more this patch tomorrow. 
Great work, very complete.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n", "msg_date": "Sat, 11 Sep 2021 21:13:38 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\n\n\n> Just noted that there is no support for REASSIGN OWNED BY:\n>\n> \"\"\"\n> regression=# create variable random_number numeric;\n> CREATE VARIABLE\n> regression=# alter variable random_number owner to jcm;\n> ALTER VARIABLE\n> regression=# reassign owned by jcm to jaime;\n> ERROR: unexpected classid 9222\n> \"\"\"\n>\n>\nshould be fixed by the attached patch, please check.\n\n\n\n> TEMP variables are not schema variables? at least not attached to the\n> schema one expects:\n>\n\ntemp variables are schema variables like any other. But they are created in\ntemp schema - like temp tables.\nI designed it in consistency with temporary tables.\n\n\n> \"\"\"\n> regression=# create temp variable random_number numeric ;\n> CREATE VARIABLE\n> regression=# \\dV\n> List of variables\n> Schema | Name | Type | Is nullable | Is mutable | Default\n> | Owner | Transaction\n> al end action\n>\n> -----------+---------------+---------+-------------+------------+---------+----------+------------\n> --------------\n> pg_temp_4 | random_number | numeric | t | t |\n> | jcasanov |\n> (1 row)\n>\n> regression=# select public.random_number;\n> ERROR: missing FROM-clause entry for table \"public\"\n> LINE 1: select public.random_number;\n> ^\n> \"\"\"\n>\n> There was a comment that TEMP variables should be DECLAREd instead of\n> CREATEd, i guess that is because those have similar behaviour. At least,\n> I would like to see similar messages when using the ON COMMIT DROP\n> option in a TEMP variable:\n>\n\nI don't remember this comment. 
When I talked about similarity with the\nDECLARE statement, I thought about semantic similarity with T-SQL\n(Microsoft SQL) DECLARE command. Unfortunately, DECLARE command is pretty\nmessy - it exists in SQL, it exists in SQL/PSM and it exists in T-SQL - and\nevery time has similar syntax, but partially different semantics. For me -\nCREATE TEMP VARIABLE creates session's life limited variable (by default),\nsimilarly like DECLARE @localvariable command from T-SQL.\n\n\n> \"\"\"\n> regression=# create temp variable random_number numeric on commit drop;\n> CREATE VARIABLE\n> regression=# \\dV\n> Did not find any schema variables.\n> regression=# declare q cursor for select 1;\n> ERROR: DECLARE CURSOR can only be used in transaction blocks\n> \"\"\"\n>\n\nI have different result\n\npostgres=# create temp variable random_number numeric on commit drop;\nCREATE VARIABLE\npostgres=# \\dV\n List of variables\n┌────────┬───────────────┬─────────┬─────────────┬────────────┬─────────┬───────┬──────────────────────────┐\n│ Schema │ Name │ Type │ Is nullable │ Is mutable │ Default │\nOwner │ Transactional end action │\n╞════════╪═══════════════╪═════════╪═════════════╪════════════╪═════════╪═══════╪══════════════════════════╡\n│ public │ random_number │ numeric │ t │ t │ │\ntom2 │ │\n└────────┴───────────────┴─────────┴─────────────┴────────────┴─────────┴───────┴──────────────────────────┘\n(1 row)\n\n\n\n> About that, why are you not using syntax ON COMMIT RESET instead on\n> inventing ON TRANSACTION END RESET? seems better because you already use\n> ON COMMIT DROP.\n>\n\nI thought about this question for a very long time, and I think the\nintroduction of a new clause is better, and I will try to explain why.\n\nOne part of this patch are DDL statements - and all DDL statements are\nconsistent with other DDL statements in Postgres. 
Schema variables DDL\ncommands are transactional and for TEMP variables we can specify a scope -\nsession or transaction, and then clause ON COMMIT DROP is used. You should\nnot need to specify ON ROLLBACK action, because in this case an removing\nfrom system catalogue is only one possible action.\n\nSecond part of this patch is holding some value in schema variables or\ninitialization with default expression. The default behaviour is not\ntransactional, and the value is stored all session's time by default. But I\nthink it can be very useful to enforce initialization in some specific\ntimes - now only the end of the transaction is possible to specify. In the\nfuture there can be transaction end, transaction start, rollback, commit,\ntop query start, top query end, ... This logic is different from the logic\nof DDL commands. For DDL commands I need to specify behaviour just for the\nCOMMIT end. But for reset of non-transactional schema variables I need to\nspecify any possible end of transaction - COMMIT, ROLLBACK or COMMIT or\nROLLBACK. In this initial version I implemented \"ON COMMIT OR ROLLBACK\nRESET\", and although it is clean I think it is more readable is the clause\nthat I invented \"ON TRANSACTION END\". \"ON COMMIT RESET\" is not exact. \"ON\nCOMMIT OR ROLLBACK RESET\" sounds a little bit strange for me, but we use\nsomething similar in trigger definition \"ON INSERT OR UPDATE OR DELETE ...\"\nMy opinion is not too strong if \"ON TRANSACTION END RESET\" or \"ON COMMIT\nOR ROLLBACK RESET\" is better, and I can change it if people will have\ndifferent preferences, but I am sure so \"ON COMMIT RESET\" is not correct in\nimplemented case. And from the perspective of a PLpgSQL developer, I would\nhave initialized the variable on any transaction start, so I need to reset\nit on any end.\n\nRegards\n\nPavel\n\n\n\n> I will test more this patch tomorrow. 
Great work, very complete.\n>\n> --\n> Jaime Casanova\n> Director de Servicios Profesionales\n> SystemGuards - Consultores de PostgreSQL\n>", "msg_date": "Sun, 12 Sep 2021 17:38:42 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "ne 12. 9. 2021 v 17:38 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n>\n>\n>> Just noted that there is no support for REASSIGN OWNED BY:\n>>\n>> \"\"\"\n>> regression=# create variable random_number numeric;\n>> CREATE VARIABLE\n>> regression=# alter variable random_number owner to jcm;\n>> ALTER VARIABLE\n>> regression=# reassign owned by jcm to jaime;\n>> ERROR: unexpected classid 9222\n>> \"\"\"\n>>\n>>\n> should be fixed by the attached patch, please check.\n>\n>\n>\n>> TEMP variables are not schema variables? at least not attached to the\n>> schema one expects:\n>>\n>\n> temp variables are schema variables like any other. But they are created\n> in temp schema - like temp tables.\n> I designed it in consistency with temporary tables.\n>\n>\n>> \"\"\"\n>> regression=# create temp variable random_number numeric ;\n>> CREATE VARIABLE\n>> regression=# \\dV\n>> List of variables\n>> Schema | Name | Type | Is nullable | Is mutable | Default\n>> | Owner | Transaction\n>> al end action\n>>\n>> -----------+---------------+---------+-------------+------------+---------+----------+------------\n>> --------------\n>> pg_temp_4 | random_number | numeric | t | t |\n>> | jcasanov |\n>> (1 row)\n>>\n>> regression=# select public.random_number;\n>> ERROR: missing FROM-clause entry for table \"public\"\n>> LINE 1: select public.random_number;\n>> ^\n>> \"\"\"\n>>\n>> There was a comment that TEMP variables should be DECLAREd instead of\n>> CREATEd, i guess that is because those have similar behaviour. 
At least,\n>> I would like to see similar messages when using the ON COMMIT DROP\n>> option in a TEMP variable:\n>>\n>\n> I don't remember this comment. When I talked about similarity with the\n> DECLARE statement, I thought about semantic similarity with T-SQL\n> (Microsoft SQL) DECLARE command. Unfortunately, DECLARE command is pretty\n> messy - it exists in SQL, it exists in SQL/PSM and it exists in T-SQL - and\n> every time has similar syntax, but partially different semantics. For me -\n> CREATE TEMP VARIABLE creates session's life limited variable (by default),\n> similarly like DECLARE @localvariable command from T-SQL.\n>\n\nany value of a schema variable has a session (or transaction) life cycle.\nBut the schema variable itself is persistent. temp schema variable is an\nexception. It is limited by session (and the value stored in the variable\nis limited to session too).\n\n\n>\n>> \"\"\"\n>> regression=# create temp variable random_number numeric on commit drop;\n>> CREATE VARIABLE\n>> regression=# \\dV\n>> Did not find any schema variables.\n>> regression=# declare q cursor for select 1;\n>> ERROR: DECLARE CURSOR can only be used in transaction blocks\n>> \"\"\"\n>>\n>\n> I have different result\n>\n> postgres=# create temp variable random_number numeric on commit drop;\n> CREATE VARIABLE\n> postgres=# \\dV\n> List of variables\n>\n> ┌────────┬───────────────┬─────────┬─────────────┬────────────┬─────────┬───────┬──────────────────────────┐\n> │ Schema │ Name │ Type │ Is nullable │ Is mutable │ Default │\n> Owner │ Transactional end action │\n>\n> ╞════════╪═══════════════╪═════════╪═════════════╪════════════╪═════════╪═══════╪══════════════════════════╡\n> │ public │ random_number │ numeric │ t │ t │ │\n> tom2 │ │\n>\n> └────────┴───────────────┴─────────┴─────────────┴────────────┴─────────┴───────┴──────────────────────────┘\n> (1 row)\n>\n>\n>\n>> About that, why are you not using syntax ON COMMIT RESET instead on\n>> inventing ON TRANSACTION END 
RESET? seems better because you already use\n>> ON COMMIT DROP.\n>>\n>\n> I thought about this question for a very long time, and I think the\n> introduction of a new clause is better, and I will try to explain why.\n>\n> One part of this patch are DDL statements - and all DDL statements are\n> consistent with other DDL statements in Postgres. Schema variables DDL\n> commands are transactional and for TEMP variables we can specify a scope -\n> session or transaction, and then clause ON COMMIT DROP is used. You should\n> not need to specify ON ROLLBACK action, because in this case an removing\n> from system catalogue is only one possible action.\n>\n> Second part of this patch is holding some value in schema variables or\n> initialization with default expression. The default behaviour is not\n> transactional, and the value is stored all session's time by default. But I\n> think it can be very useful to enforce initialization in some specific\n> times - now only the end of the transaction is possible to specify. In the\n> future there can be transaction end, transaction start, rollback, commit,\n> top query start, top query end, ... This logic is different from the logic\n> of DDL commands. For DDL commands I need to specify behaviour just for the\n> COMMIT end. But for reset of non-transactional schema variables I need to\n> specify any possible end of transaction - COMMIT, ROLLBACK or COMMIT or\n> ROLLBACK. In this initial version I implemented \"ON COMMIT OR ROLLBACK\n> RESET\", and although it is clean I think it is more readable is the clause\n> that I invented \"ON TRANSACTION END\". \"ON COMMIT RESET\" is not exact. 
\"ON\n> COMMIT OR ROLLBACK RESET\" sounds a little bit strange for me, but we use\n> something similar in trigger definition \"ON INSERT OR UPDATE OR DELETE ...\"\n> My opinion is not too strong if \"ON TRANSACTION END RESET\" or \"ON COMMIT\n> OR ROLLBACK RESET\" is better, and I can change it if people will have\n> different preferences, but I am sure so \"ON COMMIT RESET\" is not correct in\n> implemented case. And from the perspective of a PLpgSQL developer, I would\n> have initialized the variable on any transaction start, so I need to reset\n> it on any end.\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n>> I will test more this patch tomorrow. Great work, very complete.\n>>\n>> --\n>> Jaime Casanova\n>> Director de Servicios Profesionales\n>> SystemGuards - Consultores de PostgreSQL\n>>\n>\n\nne 12. 9. 2021 v 17:38 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:Hi\n\nJust noted that there is no support for REASSIGN OWNED BY:\n\n\"\"\"\nregression=# create variable random_number numeric;\nCREATE VARIABLE\nregression=# alter variable random_number owner to jcm;\nALTER VARIABLE\nregression=# reassign owned by jcm to jaime;\nERROR:  unexpected classid 9222\n\"\"\"\nshould be fixed by the attached patch, please check. \n\nTEMP variables are not schema variables? at least not attached to the\nschema one expects:temp variables are schema variables like any other. But they are created in temp schema - like temp tables.I designed it in consistency with temporary tables. 
\n\n\"\"\"\nregression=# create temp variable random_number numeric ;\nCREATE VARIABLE\nregression=# \\dV\n                                               List of variables\n  Schema   |     Name      |  Type   | Is nullable | Is mutable | Default |  Owner   | Transaction\nal end action\n-----------+---------------+---------+-------------+------------+---------+----------+------------\n--------------\n pg_temp_4 | random_number | numeric | t           | t          |         | jcasanov |\n(1 row)\n\nregression=# select public.random_number;\nERROR:  missing FROM-clause entry for table \"public\"\nLINE 1: select public.random_number;\n               ^\n\"\"\"\n\nThere was a comment that TEMP variables should be DECLAREd instead of\nCREATEd, i guess that is because those have similar behaviour. At least,\nI would like to see similar messages when using the ON COMMIT DROP\noption in a TEMP variable:I don't remember this comment. When I talked about similarity with the DECLARE statement, I thought about semantic similarity with T-SQL (Microsoft SQL) DECLARE command. Unfortunately, DECLARE command is pretty messy - it exists in SQL, it exists in SQL/PSM and it exists in T-SQL - and every time has similar syntax, but partially different semantics. For me - CREATE TEMP VARIABLE creates session's life limited variable (by default), similarly like DECLARE @localvariable command from T-SQL. any value of a schema variable has a session (or transaction) life cycle. But the schema variable itself is persistent.  temp schema variable is an exception. 
It is limited by session (and the value stored in the variable is limited to session too).\n\n\"\"\"\nregression=# create temp variable random_number numeric on commit drop;\nCREATE VARIABLE\nregression=# \\dV\nDid not find any schema variables.\nregression=# declare q cursor  for select 1;\nERROR:  DECLARE CURSOR can only be used in transaction blocks\n\"\"\"I have different result postgres=# create temp variable random_number numeric on commit drop;CREATE VARIABLEpostgres=# \\dV                                             List of variables┌────────┬───────────────┬─────────┬─────────────┬────────────┬─────────┬───────┬──────────────────────────┐│ Schema │     Name      │  Type   │ Is nullable │ Is mutable │ Default │ Owner │ Transactional end action │╞════════╪═══════════════╪═════════╪═════════════╪════════════╪═════════╪═══════╪══════════════════════════╡│ public │ random_number │ numeric │ t           │ t          │         │ tom2  │                          │└────────┴───────────────┴─────────┴─────────────┴────────────┴─────────┴───────┴──────────────────────────┘(1 row)\n\nAbout that, why are you not using syntax ON COMMIT RESET instead on\ninventing ON TRANSACTION END RESET? seems better because you already use\nON COMMIT DROP.I thought about this question for a very long time, and I think the introduction of a new clause is better, and I will try to explain why.One part of this patch are DDL statements - and all DDL statements are consistent with other DDL statements in Postgres. Schema variables DDL commands are transactional and for TEMP variables we can specify a scope - session or transaction, and then clause ON COMMIT DROP is used. You should not need to specify ON ROLLBACK action, because in this case an removing from system catalogue is only one possible action.Second part of this patch is holding some value in schema variables or initialization with default expression. 
The default behaviour is not transactional, and the value is stored for the whole session by default. But I think it can be very useful to enforce initialization at some specific times - now only the end of the transaction is possible to specify. In the future there can be transaction end, transaction start, rollback, commit, top query start, top query end, ... This logic is different from the logic of DDL commands. For DDL commands I need to specify behaviour just for the COMMIT end. But for the reset of non-transactional schema variables I need to specify any possible end of a transaction - COMMIT, ROLLBACK, or COMMIT or ROLLBACK. In this initial version I implemented \"ON COMMIT OR ROLLBACK RESET\", and although it is clean, I think the clause that I invented, \"ON TRANSACTION END\", is more readable. \"ON COMMIT RESET\" is not exact. \"ON COMMIT OR ROLLBACK RESET\" sounds a little bit strange to me, but we use something similar in trigger definitions: \"ON INSERT OR UPDATE OR DELETE ...\" My opinion is not too strong on whether \"ON TRANSACTION END RESET\" or \"ON COMMIT OR ROLLBACK RESET\" is better, and I can change it if people have different preferences, but I am sure that \"ON COMMIT RESET\" is not correct for the implemented case. And from the perspective of a PL/pgSQL developer, I would have initialized the variable at any transaction start, so I need to reset it at any end.\n\nRegards\n\nPavel\n\nI will test more this patch tomorrow. 
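[Editor's sketch] The distinction drawn above - a DDL-scope clause versus a value-reset clause - can be illustrated with the syntax proposed by the patch (hypothetical: this feature is not part of any released PostgreSQL, so the exact spelling may change):

```sql
-- DDL life cycle: the temporary variable itself is dropped at commit.
CREATE TEMP VARIABLE tv numeric ON COMMIT DROP;

-- Value life cycle: the variable definition persists, but its
-- non-transactional value is re-initialized from the DEFAULT at any
-- transaction end, whether the transaction commits or rolls back.
CREATE VARIABLE public.v numeric DEFAULT 0 ON TRANSACTION END RESET;
```

The first clause needs no ROLLBACK variant (removal from the catalogue is the only possible action), while the second must cover both ways a transaction can end - which is the argument for a dedicated ON TRANSACTION END spelling.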
Great work, very complete.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL", "msg_date": "Sun, 12 Sep 2021 17:45:25 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On Sun, Sep 12, 2021 at 05:38:42PM +0200, Pavel Stehule wrote:\n> Hi\n> \n> > \"\"\"\n> > regression=# create temp variable random_number numeric on commit drop;\n> > CREATE VARIABLE\n> > regression=# \\dV\n> > Did not find any schema variables.\n> > regression=# declare q cursor for select 1;\n> > ERROR: DECLARE CURSOR can only be used in transaction blocks\n> > \"\"\"\n> >\n> \n> I have different result\n> \n> postgres=# create temp variable random_number numeric on commit drop;\n> CREATE VARIABLE\n> postgres=# \\dV\n> List of variables\n> ┌────────┬───────────────┬─────────┬─────────────┬────────────┬─────────┬───────┬──────────────────────────┐\n> │ Schema │ Name │ Type │ Is nullable │ Is mutable │ Default │\n> Owner │ Transactional end action │\n> ╞════════╪═══════════════╪═════════╪═════════════╪════════════╪═════════╪═══════╪══════════════════════════╡\n> │ public │ random_number │ numeric │ t │ t │ │\n> tom2 │ │\n> └────────┴───────────────┴─────────┴─────────────┴────────────┴─────────┴───────┴──────────────────────────┘\n> (1 row)\n> \n> \n> \n\nHi, \n\nThanks, will test rebased version.\nBTW, that is not the temp variable. You can note it because of the\nschema or the lack of a \"Transaction end action\". That is a normal\nnon-temp variable that has been created before. 
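[Editor's sketch] Jaime's point about the schema column can be illustrated as follows (again using the patch's proposed, hypothetical syntax; variable names are illustrative):

```sql
CREATE VARIABLE public.v numeric;  -- persistent definition, schema "public"
CREATE TEMP VARIABLE tv numeric;   -- placed in a per-session pg_temp_N schema

-- Outside an explicit transaction block, ON COMMIT DROP removes the
-- variable as soon as the implicit transaction commits, so a \dV issued
-- right afterwards finds nothing - the same way DECLARE CURSOR is only
-- usable inside a transaction block:
BEGIN;
CREATE TEMP VARIABLE tv2 numeric ON COMMIT DROP;
COMMIT;  -- tv2 is dropped here
```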
A TEMP variable with an\nON COMMIT DROP created outside an explicit transaction will disappear\nimmediately like a cursor does in the same situation.\n\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n", "msg_date": "Sun, 12 Sep 2021 11:26:00 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi,\n>\n> Thanks, will test rebased version.\n> BTW, that is not the temp variable. You can note it because of the\n> schema or the lack of a \"Transaction end action\". That is a normal\n> non-temp variable that has been created before. A TEMP variable with an\n> ON COMMIT DROP created outside an explicit transaction will disappear\n> immediately like a cursor does in the same situation.\n>\n\nUnfortunately, I don't see it - or I don't understand your example from\nthis morning's mail well\n\n\"\"\"\nregression=# create temp variable random_number numeric ;\nCREATE VARIABLE\nregression=# \dV\n                                               List of variables\n  Schema   |     Name      |  Type   | Is nullable | Is mutable | Default |  Owner   | Transaction\nal end action\n-----------+---------------+---------+-------------+------------+---------+----------+------------\n--------------\n pg_temp_4 | random_number | numeric | t           | t          |         | jcasanov |\n(1 row)\n\nregression=# select public.random_number;\nERROR:  missing FROM-clause entry for table \"public\"\nLINE 1: select public.random_number;\n               ^\n\"\"\"\n\n\n>\n>\n> --\n> Jaime Casanova\n> Director de Servicios Profesionales\n> SystemGuards - Consultores de PostgreSQL\n>", "msg_date": "Sun, 12 Sep 2021 19:15:05 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\njust rebase of patch\n\nRegards\n\nPavel", "msg_date": "Thu, 16 Sep 2021 07:15:31 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nonly rebase\n\nRegards\n\nPavel\n\nčt 16. 9. 2021 v 7:15 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> just rebase of patch\n>\n> Regards\n>\n> Pavel\n>\n>", "msg_date": "Sat, 30 Oct 2021 08:35:03 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi,\n\nI took a quick look at the latest patch version. 
In general the patch\nlooks pretty complete and clean, and for now I have only some basic\ncomments. The attached patch tweaks some of this, along with a couple\nadditional minor changes that I'll not discuss here.\n\n\n1) Not sure why we need to call this \"schema variables\". Most objects\nare placed in a schema, and we don't say \"schema tables\" for example.\nAnd it's CREATE VARIABLE and not CREATE SCHEMA VARIABLE, so it's a bit\ninconsistent.\n\nThe docs actually use \"Global variables\" in one place for some reason.\n\n\n2) I find this a bit confusing:\n\ntest=# select non_existent_variable;\nERROR:  column \"non_existent_variable\" does not exist\nLINE 1: select non_existent_variable;\n\nI wonder if this means using SELECT to read variables is a bad idea, and\nwe should have a separate command, just like we have LET (instead of\njust using UPDATE in some way).\n\n\n3) I've reworded / tweaked a couple places in the docs, but this really\nneeds a native speaker - I don't have a very good \"feeling\" for this\ntechnical language so it's probably still quite cumbersome.\n\n\n4) Is a sequential scan of the hash table in clean_cache_callback() a\ngood idea? I wonder how fast (with how many variables) it'll become\nnoticeable, but it may be good enough for now and we can add something\nbetter (tracing which variables need resetting) later.\n\n\n5) In what situation would we call clean_cache_callback() without a\ntransaction state? If that happens it seems more like a bug, so\nmaybe elog(ERROR) or Assert() would be more appropriate?\n\n\n6) free_schema_variable does not actually use the force parameter\n\n\n7) The target_exprkind expression in transformSelectStmt really needs\nsome explanation. 
Because what's the chance you'll look at this in 6 months\nand understand what it does?\n\n    target_exprkind =\n        (pstate->p_expr_kind != EXPR_KIND_LET_TARGET ||\n         pstate->parentParseState != NULL) ?\n                    EXPR_KIND_SELECT_TARGET : EXPR_KIND_LET_TARGET;\n\n\n8) immutable variables without a default value\n\nIMO this case should not be allowed. On 2021/08/29 you wrote:\n\n    I thought about this case, and I have one scenario, where this\n    behaviour can be useful. When the variable is declared as IMMUTABLE\n    NOT NULL without not null default, then any access to the content of\n    the variable has to fail. I think it can be used for detection,\n    where and when the variable is first used. So this behavior is\n    allowed just because I think, so this feature can be interesting for\n    debugging. If this idea is too strange, I have no problem to disable\n    this case.\n\nThis seems like a really strange use case. In production code you'll\nnot do this, because then the variable is useless and the code does not\nwork at all (it'll just fail whenever it attempts to access the var).\nAnd if you can modify the code, there are other / better ways to do this\n(raising an exception, ...).\n\nSo this seems pretty useless to me, +1 to disabling it.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 3 Nov 2021 14:05:02 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "FWIW the patch was marked as RFC for about a year, but there was plenty\nof discussion / changes since then, so that seemed premature. 
I've\nswitched it back to WoA.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 3 Nov 2021 16:20:30 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On Wed, Nov 03, 2021 at 02:05:02PM +0100, Tomas Vondra wrote:\n> 3) I've reworded / tweaked a couple places in the docs, but this really\n> needs a native speaker - I don't have a very good \"feeling\" for this\n> technical language so it's probably still quite cumbersome.\n\nOn Daniel's suggestion, I have reviewed the docs, and then proofread the rest\nof the patch. My amendments are in 0003.\n\n-- \nJustin", "msg_date": "Fri, 5 Nov 2021 20:39:04 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nst 3. 11. 2021 v 14:05 odesílatel Tomas Vondra <\ntomas.vondra@enterprisedb.com> napsal:\n\n> Hi,\n>\n> I took a quick look at the latest patch version. In general the patch\n> looks pretty complete and clean, and for now I have only some basic\n> comments. The attached patch tweaks some of this, along with a couple\n> additional minor changes that I'll not discuss here.\n>\n>\n> 1) Not sure why we need to call this \"schema variables\". Most objects\n> are placed in a schema, and we don't say \"schema tables\" for example.\n> And it's CREATE VARIABLE and not CREATE SCHEMA VARIABLE, so it's a bit\n> inconsistent.\n>\n\nYes, there is inconsistency, but I think it is necessary. The name\n\"variable\" is too generic. Theoretically we can use other adjectives like\nsession variables or global variables and the name will be valid. But it\ndoesn't describe the fundamentals of design. This is similar to the\npackage's variables from PL/SQL. 
These variables are global, session's\nvariables too. But the usual name is \"package variables\". So schema\nvariables are assigned to schemes, and I think a good name can be \"schema\nvariables\". But it is not necessary to repeat keyword schema in the CREATE\nCOMMAND.\n\nMy opinion is not too strong in this case, and I can accept just\n\"variables\" or \"session's variables\" or \"global variables\", but I am not\nsure if these names describe this feature well, because still they are too\ngeneric. There are too many different implementations of session global\nvariables (see PL/SQL or T-SQL or DB2).\n\n\n> The docs actually use \"Global variables\" in one place for some reason.\n>\n>\n> 2) I find this a bit confusing:\n>\n> SELECT non_existent_variable;\n> test=# select s;\n> ERROR: column \"non_existent_variable\" does not exist\n> LINE 1: select non_existent_variable;\n>\n> I wonder if this means using SELECT to read variables is a bad idea, and\n> we should have a separate command, just like we have LET (instead of\n> just using UPDATE in some way).\n>\n\nI am sure so I want to use variables in SELECTs. One interesting case is\nusing variables in RLS.\n\nI prefer to fix this error message to \"column or variable ... does not\nexist\"\n\n\n>\n>\n> 3) I've reworded / tweaked a couple places in the docs, but this really\n> needs a native speaker - I don't have a very good \"feeling\" for this\n> technical language so it's probably still quite cumbersome.\n>\n>\n> 4) Is sequential scan of the hash table in clean_cache_callback() a\n> good idea? I wonder how fast (with how many variables) it'll become\n> noticeable, but it may be good enough for now and we can add something\n> better (tracing which variables need resetting) later.\n>\n>\nI have to check it.\n\n\n>\n> 5) In what situation would we call clean_cache_callback() without a\n> transaction state? 
> If that happens it seems more like a bug, so\n> maybe elog(ERROR) or Assert() would be more appropriate?\n>\n\n\n\n>\n>\n> 6) free_schema_variable does not actually use the force parameter\n>\n>\n> 7) The target_exprkind expression in transformSelectStmt really needs\n> some explanation. Because what's the chance you'll look at this in 6 months\n> and understand what it does?\n>\n>     target_exprkind =\n>         (pstate->p_expr_kind != EXPR_KIND_LET_TARGET ||\n>          pstate->parentParseState != NULL) ?\n>                     EXPR_KIND_SELECT_TARGET : EXPR_KIND_LET_TARGET;\n>\n>\n> 8) immutable variables without a default value\n>\n> IMO this case should not be allowed. On 2021/08/29 you wrote:\n>\n>     I thought about this case, and I have one scenario, where this\n>     behaviour can be useful. When the variable is declared as IMMUTABLE\n>     NOT NULL without not null default, then any access to the content of\n>     the variable has to fail. I think it can be used for detection,\n>     where and when the variable is first used. So this behavior is\n>     allowed just because I think, so this feature can be interesting for\n>     debugging. If this idea is too strange, I have no problem to disable\n>     this case.\n>\n> This seems like a really strange use case. In production code you'll\n> not do this, because then the variable is useless and the code does not\n> work at all (it'll just fail whenever it attempts to access the var).\n> And if you can modify the code, there are other / better ways to do this\n> (raising an exception, ...).\n>\n> So this seems pretty useless to me, +1 to disabling it.\n>\n\nI'll disable it.\n\n\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>", "msg_date": "Sat, 6 Nov 2021 04:45:19 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On Sat, Nov 06, 2021 at 04:45:19AM +0100, Pavel Stehule wrote:\n> st 3. 11. 2021 v 14:05 odesílatel Tomas Vondra <tomas.vondra@enterprisedb.com> napsal:\n> > 1) Not sure why we need to call this \"schema variables\". Most objects\n> > are placed in a schema, and we don't say \"schema tables\" for example.\n> > And it's CREATE VARIABLE and not CREATE SCHEMA VARIABLE, so it's a bit\n> > inconsistent.\n\n+1\n\nAt least the error messages need to be consistent.\nIt doesn't make sense to have both of these:\n\n+               elog(ERROR, \"cache lookup failed for schema variable %u\", varid);\n+               elog(ERROR, \"cache lookup failed for variable %u\", varid);\n\n> Yes, there is inconsistency, but I think it is necessary. The name\n> \"variable\" is too generic. Theoretically we can use other adjectives like\n> session variables or global variables and the name will be valid. But it\n> doesn't describe the fundamentals of design. This is similar to the\n> package's variables from PL/SQL. These variables are global, session's\n> variables too. But the usual name is \"package variables\". So schema\n> variables are assigned to schemes, and I think a good name can be \"schema\n> variables\". 
But it is not necessary to repeat keyword schema in the CREATE\n> COMMAND.\n> \n> My opinion is not too strong in this case, and I can accept just\n> \"variables\" or \"session's variables\" or \"global variables\", but I am not\n> sure if these names describe this feature well, because still they are too\n> generic. There are too many different implementations of session global\n> variables (see PL/SQL or T-SQL or DB2).\n\nI would prefer \"session variable\".\n\nTo me, this feature seems similar to a CTE (which exists for a single\nstatement), or a temporary table (which exists for a single transaction). So\n\"session\" conveys a lot more of its meaning than \"schema\".\n\nBut don't rename everything just for me...\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 6 Nov 2021 09:57:14 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "so 6. 11. 2021 v 15:57 odesílatel Justin Pryzby <pryzby@telsasoft.com>\nnapsal:\n\n> On Sat, Nov 06, 2021 at 04:45:19AM +0100, Pavel Stehule wrote:\n> > st 3. 11. 2021 v 14:05 odesílatel Tomas Vondra <\n> tomas.vondra@enterprisedb.com> napsal:\n> > > 1) Not sure why we need to call this \"schema variables\". Most objects\n> > > are placed in a schema, and we don't say \"schema tables\" for example.\n> > > And it's CREATE VARIABLE and not CREATE SCHEMA VARIABLE, so it's a bit\n> > > inconsistent.\n>\n> +1\n>\n> At least the error messages need to be consistent.\n> It doesn't make sense to have both of these:\n>\n> + elog(ERROR, \"cache lookup failed for schema variable %u\",\n> varid);\n> + elog(ERROR, \"cache lookup failed for variable %u\", varid);\n>\n> > Yes, there is inconsistency, but I think it is necessary. The name\n> > \"variable\" is too generic. Theoretically we can use other adjectives like\n> > session variables or global variables and the name will be valid. 
But it\n> > doesn't describe the fundamentals of design. This is similar to the\n> > package's variables from PL/SQL. These variables are global, session's\n> > variables too. But the usual name is \"package variables\". So schema\n> > variables are assigned to schemes, and I think a good name can be \"schema\n> > variables\". But it is not necessary to repeat keyword schema in the\n> CREATE\n> > COMMAND.\n> >\n> > My opinion is not too strong in this case, and I can accept just\n> > \"variables\" or \"session's variables\" or \"global variables\", but I am not\n> > sure if these names describe this feature well, because still they are\n> too\n> > generic. There are too many different implementations of session global\n> > variables (see PL/SQL or T-SQL or DB2).\n>\n> I would prefer \"session variable\".\n>\n> To me, this feature seems similar to a CTE (which exists for a single\n> statement), or a temporary table (which exists for a single transaction).\n> So\n> \"session\" conveys a lot more of its meaning than \"schema\".\n>\n\nIt depends on where you are looking. There are two perspectives - data and\nmetadata. And if I use data perspective, then it is session related. If I\nuse metadata perspective, then it can be persistent or temporal like\ntables. I see strong similarity with Global Temporary Tables - but I think\nnaming \"local temporary tables\" and \"global temporary tables\" can be not\nintuitive or messy for a lot of people too. Anyway, if people will try to\nfind this feature on Google, then probably use keywords \"session\nvariables\", so maybe my preference of more technical terminology is obscure\nand not practical, and the name \"session variables\" can be more practical\nfor other people. If I use the system used for GTT - then the exact name\ncan be \"Global Session Variable\". Can we use this name? 
Or shortly just\nSession Variables because we don't support local session variables now.\n\nWhat do you think about it?\n\nRegards\n\nPavel\n\n\n\n> But don't rename everything just for me...\n>\n> --\n> Justin\n>", "msg_date": "Sat, 6 Nov 2021 16:40:55 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "\n\nOn 11/6/21 16:40, Pavel Stehule wrote:\n> \n> \n> so 6. 11. 2021 v 15:57 odesílatel Justin Pryzby <pryzby@telsasoft.com \n> <mailto:pryzby@telsasoft.com>> napsal:\n> \n> On Sat, Nov 06, 2021 at 04:45:19AM +0100, Pavel Stehule wrote:\n> > st 3. 11. 
2021 v 14:05 odesílatel Tomas Vondra\n> <tomas.vondra@enterprisedb.com\n> <mailto:tomas.vondra@enterprisedb.com>> napsal:\n> > > 1) Not sure why we need to call this \"schema variables\". Most\n> objects\n> > > are placed in a schema, and we don't say \"schema tables\" for\n> example.\n> > > And it's CREATE VARIABLE and not CREATE SCHEMA VARIABLE, so\n> it's a bit\n> > > inconsistent.\n> \n> +1\n> \n> At least the error messages need to be consistent.\n> It doesn't make sense to have both of these:\n> \n> +               elog(ERROR, \"cache lookup failed for schema variable\n> %u\", varid);\n> +               elog(ERROR, \"cache lookup failed for variable %u\",\n> varid);\n> \n> > Yes, there is inconsistency, but I think it is necessary. The name\n> > \"variable\" is too generic. Theoretically we can use other\n> adjectives like\n> > session variables or global variables and the name will be valid.\n> But it\n> > doesn't describe the fundamentals of design. This is similar to the\n> > package's variables from PL/SQL. These variables are global,\n> session's\n> > variables too. But the usual name is \"package variables\". So schema\n> > variables are assigned to schemes, and I think a good name can be\n> \"schema\n> > variables\". But it is not necessary to repeat keyword schema in\n> the CREATE\n> > COMMAND.\n> >\n> > My opinion is not too strong in this case, and I can accept just\n> > \"variables\" or \"session's variables\" or \"global variables\", but I\n> am not\n> > sure if these names describe this feature well, because still\n> they are too\n> > generic. There are too many different implementations of session\n> global\n> > variables (see PL/SQL or T-SQL or DB2).\n> \n> I would prefer \"session variable\".\n> \n> To me, this feature seems similar to a CTE (which exists for a single\n> statement), or a temporary table (which exists for a single\n> transaction).  
So\n> \"session\" conveys a lot more of its meaning than \"schema\".\n> \n> \n> It depends on where you are looking. There are two perspectives - data \n> and metadata. And if I use data perspective, then it is session related. \n> If I use metadata perspective, then it can be persistent or temporal \n> like tables.\n\nI think you mean \"temporary\" not \"temporal\". This really confused me for \na while, because temporal means \"involving time\" (e.g. a table with \nfrom/to timestamp range, etc).\n\n> I see strong similarity with Global Temporary Tables - but \n> I think naming \"local temporary tables\" and \"global temporary tables\" \n> can be not intuitive or messy for a lot of people too.\n\nRight, it's a bit like global temporary tables, in the sense that \nthere's a shared definition but local (session) state.\n\n> Anyway, if people will try to find this feature on Google, then \n> probably use keywords \"session variables\", so maybe my preference of\n> more technical terminology is obscure and not practical, and the name\n> \"session variables\" can be more practical for other people.\nHmmm, maybe.\n\n> If I use the system used for GTT - then the exact name can be \"Global\n> Session Variable\". Can we use this name? Or shortly just Session\n> Variables because we don't support local session variables now.\n\nSo a \"local variable\" would be defined just for a given session, just \nlike a temporary table? Wouldn't that have the same issues with catalog \nbloat as temporary tables?\n\nI'd probably vote for \"session variables\". 
We can call it local/global \nsession variables in the future, if we end up implementing that.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 7 Nov 2021 22:14:00 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On 11/6/21 04:45, Pavel Stehule wrote:\n> Hi\n> \n> st 3. 11. 2021 v 14:05 odesílatel Tomas Vondra \n> <tomas.vondra@enterprisedb.com <mailto:tomas.vondra@enterprisedb.com>> \n> napsal:\n> \n> Hi,\n> \n> I took a quick look at the latest patch version. In general the patch\n> looks pretty complete and clean, and for now I have only some basic\n> comments. The attached patch tweaks some of this, along with a couple\n> additional minor changes that I'll not discuss here.\n> \n> \n> 1) Not sure why we need to call this \"schema variables\". Most objects\n> are placed in a schema, and we don't say \"schema tables\" for example.\n> And it's CREATE VARIABLE and not CREATE SCHEMA VARIABLE, so it's a bit\n> inconsistent.\n> \n> \n> Yes, there is inconsistency, but I think it is necessary. The name \n> \"variable\" is too generic. Theoretically we can use other adjectives \n> like session variables or global variables and the name will be valid. \n> But it doesn't describe the fundamentals of design. This is similar to \n> the package's variables from PL/SQL. These variables are global, \n> session's variables too. But the usual name is \"package variables\". So \n> schema variables are assigned to schemes, and I think a good name can be \n> \"schema variables\". 
But it is not necessary to repeat keyword schema in \n> the CREATE COMMAND.\n> \n> My opinion is not too strong in this case, and I can accept just \n> \"variables\" or \"session's variables\" or \"global variables\", but I am not \n> sure if these names describe this feature well, because still they are \n> too generic. There are too many different implementations of session \n> global variables (see PL/SQL or T-SQL or DB2).\n> \n\nOK. \"Session variable\" seems better to me, but I'm not sure how well \nthat matches other databases. I'm not sure how much should we feel \nconstrained by naming in other databases, though.\n\n> \n> The docs actually use \"Global variables\" in one place for some reason.\n> \n> \n> 2) I find this a bit confusing:\n> \n> SELECT non_existent_variable;\n> test=# select s;\n> ERROR:  column \"non_existent_variable\" does not exist\n> LINE 1: select non_existent_variable;\n> \n> I wonder if this means using SELECT to read variables is a bad idea, and\n> we should have a separate command, just like we have LET (instead of\n> just using UPDATE in some way).\n> \n> \n> I am sure so I want to use variables in SELECTs. One interesting case is \n> using variables in RLS.\n> \n\nHow much more complicated would it be without the SELECT?\n\n> I prefer to fix this error message to \"column or variable ... does not \n> exist\"\n> \n\nNot sure it's a good idea to make the error message more ambiguous. Most \npeople won't use variables at all, and the message will be less clear \nfor them.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 7 Nov 2021 22:36:50 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "ne 7. 11. 
2021 v 22:36 odesílatel Tomas Vondra <\ntomas.vondra@enterprisedb.com> napsal:\n\n> On 11/6/21 04:45, Pavel Stehule wrote:\n> > Hi\n> >\n> > st 3. 11. 2021 v 14:05 odesílatel Tomas Vondra\n> > <tomas.vondra@enterprisedb.com <mailto:tomas.vondra@enterprisedb.com>>\n> > napsal:\n> >\n> > Hi,\n> >\n> > I took a quick look at the latest patch version. In general the patch\n> > looks pretty complete and clean, and for now I have only some basic\n> > comments. The attached patch tweaks some of this, along with a couple\n> > additional minor changes that I'll not discuss here.\n> >\n> >\n> > 1) Not sure why we need to call this \"schema variables\". Most objects\n> > are placed in a schema, and we don't say \"schema tables\" for example.\n> > And it's CREATE VARIABLE and not CREATE SCHEMA VARIABLE, so it's a\n> bit\n> > inconsistent.\n> >\n> >\n> > Yes, there is inconsistency, but I think it is necessary. The name\n> > \"variable\" is too generic. Theoretically we can use other adjectives\n> > like session variables or global variables and the name will be valid.\n> > But it doesn't describe the fundamentals of design. This is similar to\n> > the package's variables from PL/SQL. These variables are global,\n> > session's variables too. But the usual name is \"package variables\". So\n> > schema variables are assigned to schemes, and I think a good name can be\n> > \"schema variables\". But it is not necessary to repeat keyword schema in\n> > the CREATE COMMAND.\n> >\n> > My opinion is not too strong in this case, and I can accept just\n> > \"variables\" or \"session's variables\" or \"global variables\", but I am not\n> > sure if these names describe this feature well, because still they are\n> > too generic. There are too many different implementations of session\n> > global variables (see PL/SQL or T-SQL or DB2).\n> >\n>\n> OK. \"Session variable\" seems better to me, but I'm not sure how well\n> that matches other databases. 
I'm not sure how much should we feel\n> constrained by naming in other databases, though.\n>\n\nsession variables is generic term - there are big differences already -\nT-SQL versus PL/SQL or SQL+ or DB2\n\n\n> >\n> > The docs actually use \"Global variables\" in one place for some\n> reason.\n> >\n> >\n> > 2) I find this a bit confusing:\n> >\n> > SELECT non_existent_variable;\n> > test=# select s;\n> > ERROR: column \"non_existent_variable\" does not exist\n> > LINE 1: select non_existent_variable;\n> >\n> > I wonder if this means using SELECT to read variables is a bad idea,\n> and\n> > we should have a separate command, just like we have LET (instead of\n> > just using UPDATE in some way).\n> >\n> >\n> > I am sure so I want to use variables in SELECTs. One interesting case is\n> > using variables in RLS.\n> >\n>\n> How much more complicated would it be without the SELECT?\n>\n\nIt is not too complicated, just you want to introduce SELECT2. The sense of\nsession variables is to be used. Has no sense to hold a value on a server\nwithout the possibility to use it.\n\nSession variables can be used as global variables in PL/pgSQL. If you\ncannot use it in SQL expressions, then you need to copy it to a local\nvariable, and then you can use it. That cannot work. This design is a\nreplacement of a untyped not nullable slow workaround based on GUC, there\nis a necessity to use it in SQL.\n\n\n> > I prefer to fix this error message to \"column or variable ... does not\n> > exist\"\n> >\n>\n> Not sure it's a good idea to make the error message more ambiguous. Most\n> people won't use variables at all, and the message will be less clear\n> for them.\n>\n\nYes, there is new complexity. But it is an analogy with variables in\nPL/pgSQL with all benefits and negatives. You don't want to use dynamic SQL\neverywhere you use PL/pgSQL variables.\n\nThere are more cases than RLS in SQL\n\n1. hold value in session (for interactive work or for non interactive\nscripts). 
Sometimes you want to reuse value - we can now use CTE or\ntemporary tables. But in this case you have to store a relation, you cannot\nstore a value, that can be used as a query parameter.\n\n2. allow safe and effective parametrization of SQL scripts, and copy value\nfrom client side to server side (there is no risk of SQL injection).\n\nrun script with parameter -v xx=10\n\n```\ncreate temp variable xx as int;\nset xx = :`xx`;\ndo $$\n .. -- I can work with variable xx on server side\n\n ...\n\n$$\n\nThis is a complement to client side variables - the advantage is the possibility\nto use it outside psql, they are typed, and the metadata can be permanent.\n\n3. you can share value by PL environments (and by possible clients). But\nthis sharing is secure - the rules are the same as holding a value in a\ntable.\n\nSession variables increase complexity a little bit, but they increase the\npossibilities and comfort for developers that use databases directly. The\nanalogy with PL/pgSQL variables holds well, just you are not limited to the\nPL/pgSQL scope.\n\nRegards\n\nPavel\n\n\n\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>", "msg_date": "Mon, 8 Nov 2021 05:02:47 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On Sun, Nov 07, 2021 at 10:14:00PM +0100, Tomas Vondra wrote:\n> I'd probably vote for \"session variables\". We can call it local/global\n> session variables in the future, if we end up implementing that.\n\nBy chance, I ran into this pre-existing use of the phrase \"session variable\",\nintroduced since 8fbef1090:\n\ndoc/src/sgml/ref/set_role.sgml: <command>SET ROLE</command> does not process session variables as specified by\n\nThat's the *only* use of that phrase, but you'd have to change it to something\nlike \".. 
does not process role-specific variables as specified by ..\".\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 8 Nov 2021 05:19:36 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi Justin\n\nso 6. 11. 2021 v 2:39 odesílatel Justin Pryzby <pryzby@telsasoft.com>\nnapsal:\n\n> On Wed, Nov 03, 2021 at 02:05:02PM +0100, Tomas Vondra wrote:\n> > 3) I've reworded / tweaked a couple places in the docs, but this really\n> > needs a native speaker - I don't have a very good \"feeling\" for this\n> > technical language so it's probably still quite cumbersome.\n>\n> On Daniel's suggestion, I have reviewed the docs, and then proofread the\n> rest\n> of the patch. My amendments are in 0003.\n>\n\nThank you for review and fixes, I try to complete some version for next\nwork, and looks so your patch 0001 is broken\n\ngedit reports to me broken unicode \\A0\\A0\\A0\\A0\\A0\n\nmy last patch has 276KB and your patch has 293KB?\n\nThank you\n\nPavel\n\n\n> --\n> Justin\n>", "msg_date": "Mon, 15 Nov 2021 21:00:13 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": ">\n> my last patch has 276KB and your patch has 293KB?\n>\n\nPlease, can you resend your version of patch 0001?\n\nThank you\n\nPavel", "msg_date": "Mon, 15 Nov 2021 21:06:08 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On Mon, Nov 15, 2021 at 09:00:13PM +0100, Pavel Stehule wrote:\n> Thank you for review and fixes, I try to complete some version for next\n> work, and looks so your patch 0001 is broken\n> \n> gedit reports to me broken unicode \\A0\\A0\\A0\\A0\\A0\n> \n> my last patch has 276KB and your patch has 293KB?\n\nOn Mon, Nov 15, 2021 at 09:06:08PM +0100, Pavel Stehule wrote:\n> >\n> > my last patch has 276KB and your patch has 293KB?\n> \n> Please, can you resend your version of patch 0001?\n\nhttps://www.postgresql.org/message-id/20211106013904.GG17618@telsasoft.com\n\n0001 is exactly your patch applied to HEAD, and 0002 are Tomas' changes\nrelative to your patch.\n\n0003 is my contribution on top. 
My intent is that you wouldn't apply 0001, but\nrather apply my 0003 on top of your existing branch, and then review 0002/0003,\nand then squish the changes into your patch.\n\nI see the 0xa0 stuff in your original patch before my changes, but I'm not sure\nwhat went wrong.\n\nLet me know if you have any issue applying my changes on top of your existing,\nlocal branch ?\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 15 Nov 2021 14:23:52 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "po 15. 11. 2021 v 21:23 odesílatel Justin Pryzby <pryzby@telsasoft.com>\nnapsal:\n\n> On Mon, Nov 15, 2021 at 09:00:13PM +0100, Pavel Stehule wrote:\n> > Thank you for review and fixes, I try to complete some version for next\n> > work, and looks so your patch 0001 is broken\n> >\n> > gedit reports to me broken unicode \\A0\\A0\\A0\\A0\\A0\n> >\n> > my last patch has 276KB and your patch has 293KB?\n>\n> On Mon, Nov 15, 2021 at 09:06:08PM +0100, Pavel Stehule wrote:\n> > >\n> > > my last patch has 276KB and your patch has 293KB?\n> >\n> > Please, can you resend your version of patch 0001?\n>\n> https://www.postgresql.org/message-id/20211106013904.GG17618@telsasoft.com\n>\n> 0001 is exactly your patch applied to HEAD, and 0002 are Tomas' changes\n> relative to your patch.\n>\n> 0003 is my contribution on top. My intent is that you wouldn't apply\n> 0001, but\n> rather apply my 0003 on top of your existing branch, and then review\n> 0002/0003,\n> and then squish the changes into your patch.\n>\n> I see the 0xa0 stuff in your original patch before my changes, but I'm not\n> sure\n> what went wrong.\n>\n> Let me know if you have any issue applying my changes on top of your\n> existing,\n> local branch ?\n>\n\nIt is ok, I was able to apply all your patches to my local branch\n\nRegards\n\nPavel\n\n>\n> --\n> Justin\n>", "msg_date": "Wed, 17 Nov 2021 07:32:31 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\n8) immutable variables without a default value\n>\n> IMO this case should not be allowed. On 2021/08/29 you wrote:\n>\n>     I thought about this case, and I have one scenario, where this\n>     behaviour can be useful. When the variable is declared as IMMUTABLE\n>     NOT NULL without not null default, then any access to the content of\n>     the variable has to fail. I think it can be used for detection,\n>     where and when the variable is first used. 
So this behavior is\n>     allowed just because I think, so this feature can be interesting for\n>     debugging. If this idea is too strange, I have no problem to disable\n>     this case.\n>\n\nI checked code, and this case is disallowed already\n\npostgres=# CREATE IMMUTABLE VARIABLE xx AS int NOT NULL;\nERROR:  IMMUTABLE NOT NULL variable requires default expression\n\nRegards\n\nPavel", "msg_date": "Wed, 17 Nov 2021 17:05:57 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\npo 15. 11. 
2021 v 21:23 odesílatel Justin Pryzby <pryzby@telsasoft.com>\nnapsal:\n\n> On Mon, Nov 15, 2021 at 09:00:13PM +0100, Pavel Stehule wrote:\n> > Thank you for review and fixes, I try to complete some version for next\n> > work, and looks so your patch 0001 is broken\n> >\n> > gedit reports to me broken unicode \\A0\\A0\\A0\\A0\\A0\n> >\n> > my last patch has 276KB and your patch has 293KB?\n>\n> On Mon, Nov 15, 2021 at 09:06:08PM +0100, Pavel Stehule wrote:\n> > >\n> > > my last patch has 276KB and your patch has 293KB?\n> >\n> > Please, can you resend your version of patch 0001?\n>\n> https://www.postgresql.org/message-id/20211106013904.GG17618@telsasoft.com\n>\n> 0001 is exactly your patch applied to HEAD, and 0002 are Tomas' changes\n> relative to your patch.\n>\n> 0003 is my contribution on top. My intent is that you wouldn't apply\n> 0001, but\n> rather apply my 0003 on top of your existing branch, and then review\n> 0002/0003,\n> and then squish the changes into your patch.\n>\n> I see the 0xa0 stuff in your original patch before my changes, but I'm not\n> sure\n> what went wrong.\n>\n> Let me know if you have any issue applying my changes on top of your\n> existing,\n> local branch ?\n>\n\nI am sending new versions of patches.\n\nI hope I solved all Tomas's objections.\n\n1. The schema variables were renamed to session variables\n2. I fixed issues related to creating, dropping variables under\nsubtransactions + regress tests\n3. I fixed issues in pg_dump + regress tests\n\nRegards\n\nPavel\n\n\n> --\n> Justin\n>", "msg_date": "Sun, 19 Dec 2021 07:23:27 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Op 19-12-2021 om 07:23 schreef Pavel Stehule:\n\n> \n> I am sending new versions of patches.\n> \n> I hope I solved all Tomas's objections.\n> \n> 1. The schema variables were renamed to session variables\n> 2. 
I fixed issues related to creating, dropping variables under \n> subtransactions + regress tests\n> 3. I fixed issues in pg_dump + regress tests\n> \n\n > [0001-schema-variables-20211219.patch]\n > [0002-schema-variables-20211219.patch]\n\nHi Pavel,\n\nI get an error during test 'session_variables'.\n\n(on the upside, my own little testsuite runs without error)\n\nthanks,\n\nErik Rijkers", "msg_date": "Sun, 19 Dec 2021 08:09:29 +0100", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "ne 19. 12. 2021 v 8:09 odesílatel Erik Rijkers <er@xs4all.nl> napsal:\n\n> Op 19-12-2021 om 07:23 schreef Pavel Stehule:\n>\n> >\n> > I am sending new versions of patches.\n> >\n> > I hope I solved all Tomas's objections.\n> >\n> > 1. The schema variables were renamed to session variables\n> > 2. I fixed issues related to creating, dropping variables under\n> > subtransactions + regress tests\n> > 3. I fixed issues in pg_dump + regress tests\n> >\n>\n> > [0001-schema-variables-20211219.patch]\n> > [0002-schema-variables-20211219.patch]\n>\n> Hi Pavel,\n>\n> I get an error during test 'session_variables'.\n>\n> (on the upside, my own little testsuite runs without error)\n>\n> thanks,\n>\n\nplease, can you send me regress diff?\n\nRegards\n\nPavel\n\n\n\n> Erik Rijkers\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n\nne 19. 12. 2021 v 8:09 odesílatel Erik Rijkers <er@xs4all.nl> napsal:Op 19-12-2021 om 07:23 schreef Pavel Stehule:\n\n> \n> I am sending new versions of patches.\n> \n> I hope I solved all Tomas's objections.\n> \n> 1. The schema variables were renamed to session variables\n> 2. I fixed issues related to creating, dropping variables under \n> subtransactions + regress tests\n> 3. 
I fixed issues in pg_dump + regress tests\n> \n\n > [0001-schema-variables-20211219.patch]\n > [0002-schema-variables-20211219.patch]\n\nHi Pavel,\n\nI get an error during test 'session_variables'.\n\n(on the upside, my own little testsuite runs without error)\n\nthanks,please, can you send me regress diff?RegardsPavel \n\nErik Rijkers", "msg_date": "Sun, 19 Dec 2021 08:13:59 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "ne 19. 12. 2021 v 8:13 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> ne 19. 12. 2021 v 8:09 odesílatel Erik Rijkers <er@xs4all.nl> napsal:\n>\n>> Op 19-12-2021 om 07:23 schreef Pavel Stehule:\n>>\n>> >\n>> > I am sending new versions of patches.\n>> >\n>> > I hope I solved all Tomas's objections.\n>> >\n>> > 1. The schema variables were renamed to session variables\n>> > 2. I fixed issues related to creating, dropping variables under\n>> > subtransactions + regress tests\n>> > 3. 
I fixed issues in pg_dump + regress tests\n>> >\n>>\n>> > [0001-schema-variables-20211219.patch]\n>> > [0002-schema-variables-20211219.patch]\n>>\n>> Hi Pavel,\n>>\n>> I get an error during test 'session_variables'.\n>>\n>> (on the upside, my own little testsuite runs without error)\n>>\n>> thanks,\n>>\n>\n> please, can you send me regress diff?\n>\n\nI see the problem now, the test contains username, and that is wrong.\n\nSchema | Name | Type | Is nullable | Is mutable | Default | Owner |\nTransactional end action | Access privileges | Description\n-----------+------+---------+-------------+------------+---------+-------+--------------------------+------------------------+-------------\n- svartest | var1 | numeric | t | t | | pavel | | pavel=SW/pavel +|\n- | | | | | | | | var_test_role=SW/pavel |\n+----------+------+---------+-------------+------------+---------+----------+--------------------------+---------------------------+-------------\n+ svartest | var1 | numeric | t | t | | appveyor | | appveyor=SW/appveyor\n+|\n+ | | | | | | | | var_test_role=SW/appveyor |\n(1 row)\nREVOKE ALL ON VARIABLE var1 FROM var_test_role;\n\nI have to remove this test\n\nPavel\n\nRegards\n>\n> Pavel\n>\n>\n>\n>> Erik Rijkers\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n\nne 19. 12. 2021 v 8:13 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:ne 19. 12. 2021 v 8:09 odesílatel Erik Rijkers <er@xs4all.nl> napsal:Op 19-12-2021 om 07:23 schreef Pavel Stehule:\n\n> \n> I am sending new versions of patches.\n> \n> I hope I solved all Tomas's objections.\n> \n> 1. The schema variables were renamed to session variables\n> 2. I fixed issues related to creating, dropping variables under \n> subtransactions + regress tests\n> 3. 
I fixed issues in pg_dump + regress tests\n> \n\n > [0001-schema-variables-20211219.patch]\n > [0002-schema-variables-20211219.patch]\n\nHi Pavel,\n\nI get an error during test 'session_variables'.\n\n(on the upside, my own little testsuite runs without error)\n\nthanks,please, can you send me regress diff?I see the problem now, the test contains username, and that is wrong. Schema | Name | Type | Is nullable | Is mutable | Default | Owner | Transactional end action | Access privileges | Description -----------+------+---------+-------------+------------+---------+-------+--------------------------+------------------------+-------------- svartest | var1 | numeric | t | t | | pavel | | pavel=SW/pavel +| - | | | | | | | | var_test_role=SW/pavel | +----------+------+---------+-------------+------------+---------+----------+--------------------------+---------------------------+-------------+ svartest | var1 | numeric | t | t | | appveyor | | appveyor=SW/appveyor +| + | | | | | | | | var_test_role=SW/appveyor | (1 row) REVOKE ALL ON VARIABLE var1 FROM var_test_role;I have to remove this testPavelRegardsPavel \n\nErik Rijkers", "msg_date": "Sun, 19 Dec 2021 08:17:30 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Op 19-12-2021 om 08:13 schreef Pavel Stehule:\n> \n> \n> ne 19. 12. 
2021 v 8:09 odesílatel Erik Rijkers <er@xs4all.nl \n> >\n> \n>  > [0001-schema-variables-20211219.patch]\n>  > [0002-schema-variables-20211219.patch]\n> \n> Hi Pavel,\n> \n> I get an error during test 'session_variables'.\n> \n> (on the upside, my own little testsuite runs without error)\n> \n> thanks,\n> \n> \n> please, can you send me regress diff?\n> \n\nI did attach it but if you did not receive it, see also cfbot, especially\n\nhttps://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.156992\n\n\nErik\n\n\n\n", "msg_date": "Sun, 19 Dec 2021 08:23:43 +0100", "msg_from": "Erikjan Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "ne 19. 12. 2021 v 8:23 odesílatel Erikjan Rijkers <er@xs4all.nl> napsal:\n\n> Op 19-12-2021 om 08:13 schreef Pavel Stehule:\n> >\n> >\n> > ne 19. 12. 2021 v 8:09 odesílatel Erik Rijkers <er@xs4all.nl\n> > >\n> >\n> > > [0001-schema-variables-20211219.patch]\n> > > [0002-schema-variables-20211219.patch]\n> >\n> > Hi Pavel,\n> >\n> > I get an error during test 'session_variables'.\n> >\n> > (on the upside, my own little testsuite runs without error)\n> >\n> > thanks,\n> >\n> >\n> > please, can you send me regress diff?\n> >\n>\n> I did attach it but if you did not receive it, see also cfbot, especially\n>\n>\n> https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.156992\n\n\nsecond try\n\nI removed badly written tests\n\nPavel\n\n\n>\n>\n> Erik\n>\n>", "msg_date": "Sun, 19 Dec 2021 08:53:03 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": " > [0001-schema-variables-20211219-2.patch]\n > [0002-schema-variables-20211219-2.patch]\n\nHi Pavel,\n\nYou said earlier\n > 1. 
The schema variables were renamed to session variable\n\nBut I still see:\n$ grep -Eic 'schema variable' postgres.html\n15\n\n(postgres.html from 'make postgres.html')\n\nSo that rename doesn't seem finished.\n\n\nErik\n\n\n\n\n\n", "msg_date": "Sun, 19 Dec 2021 11:10:52 +0100", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nne 19. 12. 2021 v 11:10 odesílatel Erik Rijkers <er@xs4all.nl> napsal:\n\n> > [0001-schema-variables-20211219-2.patch]\n> > [0002-schema-variables-20211219-2.patch]\n>\n> Hi Pavel,\n>\n> You said earlier\n> > 1. The schema variables were renamed to session variable\n>\n> But I still see:\n> $ grep -Eic 'schema variable' postgres.html\n> 15\n>\n> (postgres.html from 'make postgres.html')\n>\n> So that rename doesn't seem finished.\n>\n\nYes, I forgot some changes, and more, there was a bogus regress result\nfile. Thank you for rechecking.\n\nI am sending cleaned patches\n\nRegards\n\nPavel\n\n\n\n>\n> Erik\n>\n>\n>\n>", "msg_date": "Sun, 19 Dec 2021 19:38:29 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "I don't understand what 0002 patch does relative to the 0001 patch.\nIs 0002 to change the error messages from \"schema variables\" to \"session\nvariables\" , in a separate commit to show that the main patch doesn't change\nregression results ? 
Could you add commit messages ?\n\nI mentioned before that there's a pre-existing use of the phrase \"session\nvariable\", which you should change to something else:\n\norigin:doc/src/sgml/ref/set_role.sgml: <command>SET ROLE</command> does not process session variables as specified by\norigin:doc/src/sgml/ref/set_role.sgml- the role's <link linkend=\"sql-alterrole\"><command>ALTER ROLE</command></link> settings; this only happens during\norigin:doc/src/sgml/ref/set_role.sgml- login.\n\nMaybe \"session variable\" should be added to the glossary.\n\nThe new tests crash if debug_discard_caches=on.\n\n2021-12-20 16:15:44.476 CST postmaster[7478] LOG: server process (PID 7657) was terminated by signal 6: Aborted\n2021-12-20 16:15:44.476 CST postmaster[7478] DETAIL: Failed process was running: DISCARD VARIABLES;\n\nTRAP: FailedAssertion(\"sessionvars\", File: \"sessionvariable.c\", Line: 270, PID: 7657)\n\n#2 0x0000564858a4f1a8 in ExceptionalCondition (conditionName=conditionName@entry=0x564858b8626d \"sessionvars\", errorType=errorType@entry=0x564858aa700b \"FailedAssertion\", \n fileName=fileName@entry=0x564858b86234 \"sessionvariable.c\", lineNumber=lineNumber@entry=270) at assert.c:69\n#3 0x000056485874fec6 in sync_sessionvars_xact_callback (event=<optimized out>, arg=<optimized out>) at sessionvariable.c:270\n#4 sync_sessionvars_xact_callback (event=<optimized out>, arg=<optimized out>) at sessionvariable.c:253\n#5 0x000056485868030a in CallXactCallbacks (event=XACT_EVENT_PRE_COMMIT) at xact.c:3644\n#6 CommitTransaction () at xact.c:2178\n#7 0x0000564858681975 in CommitTransactionCommand () at xact.c:3043\n#8 0x000056485892b7a9 in finish_xact_command () at postgres.c:2722\n#9 0x000056485892dc5b in finish_xact_command () at postgres.c:2720\n#10 exec_simple_query () at postgres.c:1240\n#11 0x000056485892f70a in PostgresMain () at postgres.c:4498\n#12 0x000056485889a479 in BackendRun (port=<optimized out>, port=<optimized out>) at postmaster.c:4594\n#13 BackendStartup 
(port=<optimized out>) at postmaster.c:4322\n#14 ServerLoop () at postmaster.c:1802\n#15 0x000056485889b47c in PostmasterMain () at postmaster.c:1474\n#16 0x00005648585c60c0 in main (argc=5, argv=0x564858e553f0) at main.c:198\n\n\n", "msg_date": "Mon, 20 Dec 2021 17:09:12 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nút 21. 12. 2021 v 0:09 odesílatel Justin Pryzby <pryzby@telsasoft.com>\nnapsal:\n\n> I don't understand what 0002 patch does relative to the 0001 patch.\n> Is 0002 to change the error messages from \"schema variables\" to \"session\n> variables\" , in a separate commit to show that the main patch doesn't\n> change\n> regression results ? Could you add commit messages ?\n>\n> I mentioned before that there's a pre-existing use of the phrase \"session\n> variable\", which you should change to something else:\n>\n> origin:doc/src/sgml/ref/set_role.sgml: <command>SET ROLE</command> does\n> not process session variables as specified by\n> origin:doc/src/sgml/ref/set_role.sgml- the role's <link\n> linkend=\"sql-alterrole\"><command>ALTER ROLE</command></link> settings;\n> this only happens during\n> origin:doc/src/sgml/ref/set_role.sgml- login.\n>\n> Maybe \"session variable\" should be added to the glossary.\n>\n> The new tests crash if debug_discard_caches=on.\n>\n> 2021-12-20 16:15:44.476 CST postmaster[7478] LOG: server process (PID\n> 7657) was terminated by signal 6: Aborted\n> 2021-12-20 16:15:44.476 CST postmaster[7478] DETAIL: Failed process was\n> running: DISCARD VARIABLES;\n>\n\nHow do you inject this parameter to regress tests?\n\nRegards\n\nPavel\n\n\n> TRAP: FailedAssertion(\"sessionvars\", File: \"sessionvariable.c\", Line: 270,\n> PID: 7657)\n>\n> #2 0x0000564858a4f1a8 in ExceptionalCondition\n> (conditionName=conditionName@entry=0x564858b8626d \"sessionvars\",\n> errorType=errorType@entry=0x564858aa700b 
\"FailedAssertion\",\n> fileName=fileName@entry=0x564858b86234 \"sessionvariable.c\",\n> lineNumber=lineNumber@entry=270) at assert.c:69\n> #3 0x000056485874fec6 in sync_sessionvars_xact_callback (event=<optimized\n> out>, arg=<optimized out>) at sessionvariable.c:270\n> #4 sync_sessionvars_xact_callback (event=<optimized out>, arg=<optimized\n> out>) at sessionvariable.c:253\n> #5 0x000056485868030a in CallXactCallbacks (event=XACT_EVENT_PRE_COMMIT)\n> at xact.c:3644\n> #6 CommitTransaction () at xact.c:2178\n> #7 0x0000564858681975 in CommitTransactionCommand () at xact.c:3043\n> #8 0x000056485892b7a9 in finish_xact_command () at postgres.c:2722\n> #9 0x000056485892dc5b in finish_xact_command () at postgres.c:2720\n> #10 exec_simple_query () at postgres.c:1240\n> #11 0x000056485892f70a in PostgresMain () at postgres.c:4498\n> #12 0x000056485889a479 in BackendRun (port=<optimized out>,\n> port=<optimized out>) at postmaster.c:4594\n> #13 BackendStartup (port=<optimized out>) at postmaster.c:4322\n> #14 ServerLoop () at postmaster.c:1802\n> #15 0x000056485889b47c in PostmasterMain () at postmaster.c:1474\n> #16 0x00005648585c60c0 in main (argc=5, argv=0x564858e553f0) at main.c:198\n>\n\nHiút 21. 12. 2021 v 0:09 odesílatel Justin Pryzby <pryzby@telsasoft.com> napsal:I don't understand what 0002 patch does relative to the 0001 patch.\nIs 0002 to change the error messages from \"schema variables\" to \"session\nvariables\" , in a separate commit to show that the main patch doesn't change\nregression results ?  
Could you add commit messages ?\n\nI mentioned before that there's a pre-existing use of the phrase \"session\nvariable\", which you should change to something else:\n\norigin:doc/src/sgml/ref/set_role.sgml:   <command>SET ROLE</command> does not process session variables as specified by\norigin:doc/src/sgml/ref/set_role.sgml-   the role's <link linkend=\"sql-alterrole\"><command>ALTER ROLE</command></link> settings;  this only happens during\norigin:doc/src/sgml/ref/set_role.sgml-   login.\n\nMaybe \"session variable\" should be added to the glossary.\n\nThe new tests crash if debug_discard_caches=on.\n\n2021-12-20 16:15:44.476 CST postmaster[7478] LOG:  server process (PID 7657) was terminated by signal 6: Aborted\n2021-12-20 16:15:44.476 CST postmaster[7478] DETAIL:  Failed process was running: DISCARD VARIABLES;How do you inject this parameter to regress tests?RegardsPavel \n\nTRAP: FailedAssertion(\"sessionvars\", File: \"sessionvariable.c\", Line: 270, PID: 7657)\n\n#2  0x0000564858a4f1a8 in ExceptionalCondition (conditionName=conditionName@entry=0x564858b8626d \"sessionvars\", errorType=errorType@entry=0x564858aa700b \"FailedAssertion\", \n    fileName=fileName@entry=0x564858b86234 \"sessionvariable.c\", lineNumber=lineNumber@entry=270) at assert.c:69\n#3  0x000056485874fec6 in sync_sessionvars_xact_callback (event=<optimized out>, arg=<optimized out>) at sessionvariable.c:270\n#4  sync_sessionvars_xact_callback (event=<optimized out>, arg=<optimized out>) at sessionvariable.c:253\n#5  0x000056485868030a in CallXactCallbacks (event=XACT_EVENT_PRE_COMMIT) at xact.c:3644\n#6  CommitTransaction () at xact.c:2178\n#7  0x0000564858681975 in CommitTransactionCommand () at xact.c:3043\n#8  0x000056485892b7a9 in finish_xact_command () at postgres.c:2722\n#9  0x000056485892dc5b in finish_xact_command () at postgres.c:2720\n#10 exec_simple_query () at postgres.c:1240\n#11 0x000056485892f70a in PostgresMain () at postgres.c:4498\n#12 0x000056485889a479 in BackendRun 
(port=<optimized out>, port=<optimized out>) at postmaster.c:4594\n#13 BackendStartup (port=<optimized out>) at postmaster.c:4322\n#14 ServerLoop () at postmaster.c:1802\n#15 0x000056485889b47c in PostmasterMain () at postmaster.c:1474\n#16 0x00005648585c60c0 in main (argc=5, argv=0x564858e553f0) at main.c:198", "msg_date": "Tue, 21 Dec 2021 13:29:00 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On Tue, Dec 21, 2021 at 01:29:00PM +0100, Pavel Stehule wrote:\n> Hi\n> \n> �t 21. 12. 2021 v 0:09 odes�latel Justin Pryzby <pryzby@telsasoft.com>\n> napsal:\n> \n> > I don't understand what 0002 patch does relative to the 0001 patch.\n> > Is 0002 to change the error messages from \"schema variables\" to \"session\n> > variables\" , in a separate commit to show that the main patch doesn't\n> > change\n> > regression results ? Could you add commit messages ?\n> >\n> > I mentioned before that there's a pre-existing use of the phrase \"session\n> > variable\", which you should change to something else:\n> >\n> > origin:doc/src/sgml/ref/set_role.sgml: <command>SET ROLE</command> does\n> > not process session variables as specified by\n> > origin:doc/src/sgml/ref/set_role.sgml- the role's <link\n> > linkend=\"sql-alterrole\"><command>ALTER ROLE</command></link> settings;\n> > this only happens during\n> > origin:doc/src/sgml/ref/set_role.sgml- login.\n> >\n> > Maybe \"session variable\" should be added to the glossary.\n> >\n> > The new tests crash if debug_discard_caches=on.\n> >\n> > 2021-12-20 16:15:44.476 CST postmaster[7478] LOG: server process (PID\n> > 7657) was terminated by signal 6: Aborted\n> > 2021-12-20 16:15:44.476 CST postmaster[7478] DETAIL: Failed process was\n> > running: DISCARD VARIABLES;\n> \n> How do you inject this parameter to regress tests?\n\nYou can run PGOPTIONS='-c debug_invalidate_caches=1' make check\n\nI used make 
installcheck against a running instance where I'd used\nALTER SYSTEM SET debug_discard_caches=on.\n\nYou can also manually run psql against the .sql file itself.\n...which is a good idea since this causes the regression tests take hours.\n\nOr just add SET debug_discard_caches=on to your .sql file.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 21 Dec 2021 06:36:34 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "út 21. 12. 2021 v 13:36 odesílatel Justin Pryzby <pryzby@telsasoft.com>\nnapsal:\n\n> On Tue, Dec 21, 2021 at 01:29:00PM +0100, Pavel Stehule wrote:\n> > Hi\n> >\n> > út 21. 12. 2021 v 0:09 odesílatel Justin Pryzby <pryzby@telsasoft.com>\n> > napsal:\n> >\n> > > I don't understand what 0002 patch does relative to the 0001 patch.\n> > > Is 0002 to change the error messages from \"schema variables\" to\n> \"session\n> > > variables\" , in a separate commit to show that the main patch doesn't\n> > > change\n> > > regression results ? 
Could you add commit messages ?\n> > >\n> > > I mentioned before that there's a pre-existing use of the phrase\n> \"session\n> > > variable\", which you should change to something else:\n> > >\n> > > origin:doc/src/sgml/ref/set_role.sgml:   <command>SET ROLE</command>\n> does\n> > > not process session variables as specified by\n> > > origin:doc/src/sgml/ref/set_role.sgml-   the role's <link\n> > > linkend=\"sql-alterrole\"><command>ALTER ROLE</command></link> settings;\n> > > this only happens during\n> > > origin:doc/src/sgml/ref/set_role.sgml-   login.\n> > >\n> > > Maybe \"session variable\" should be added to the glossary.\n> > >\n> > > The new tests crash if debug_discard_caches=on.\n> > >\n> > > 2021-12-20 16:15:44.476 CST postmaster[7478] LOG:  server process (PID\n> > > 7657) was terminated by signal 6: Aborted\n> > > 2021-12-20 16:15:44.476 CST postmaster[7478] DETAIL:  Failed process\n> was\n> > > running: DISCARD VARIABLES;\n> >\n> > How do you inject this parameter to regress tests?\n>\n> You can run PGOPTIONS='-c debug_invalidate_caches=1' make check\n>\n> I used make installcheck against a running instance where I'd used\n> ALTER SYSTEM SET debug_discard_caches=on.\n>\n> You can also manually run psql against the .sql file itself.\n> ...which is a good idea since this causes the regression tests take hours.\n>\n> Or just add SET debug_discard_caches=on to your .sql file.\n>\n\nok thank you\n\nI'll try it.\n\n\n> --\n> Justin\n>", "msg_date": "Tue, 21 Dec 2021 13:39:45 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "út 21. 12. 2021 v 0:09 odesílatel Justin Pryzby <pryzby@telsasoft.com>\nnapsal:\n\n> I don't understand what 0002 patch does relative to the 0001 patch.\n> Is 0002 to change the error messages from \"schema variables\" to \"session\n> variables\" , in a separate commit to show that the main patch doesn't\n> change\n> regression results ?
Could you add commit messages ?\n>\n>\ndone\n\n\n> I mentioned before that there's a pre-existing use of the phrase \"session\n> variable\", which you should change to something else:\n>\n> origin:doc/src/sgml/ref/set_role.sgml: <command>SET ROLE</command> does\n> not process session variables as specified by\n> origin:doc/src/sgml/ref/set_role.sgml- the role's <link\n> linkend=\"sql-alterrole\"><command>ALTER ROLE</command></link> settings;\n> this only happens during\n> origin:doc/src/sgml/ref/set_role.sgml- login.\n>\n\nchanged\n\n\n> Maybe \"session variable\" should be added to the glossary.\n>\n\ndone\n\n\n> The new tests crash if debug_discard_caches=on.\n>\n> 2021-12-20 16:15:44.476 CST postmaster[7478] LOG: server process (PID\n> 7657) was terminated by signal 6: Aborted\n> 2021-12-20 16:15:44.476 CST postmaster[7478] DETAIL: Failed process was\n> running: DISCARD VARIABLES;\n>\n> TRAP: FailedAssertion(\"sessionvars\", File: \"sessionvariable.c\", Line: 270,\n> PID: 7657)\n>\n> #2 0x0000564858a4f1a8 in ExceptionalCondition\n> (conditionName=conditionName@entry=0x564858b8626d \"sessionvars\",\n> errorType=errorType@entry=0x564858aa700b \"FailedAssertion\",\n> fileName=fileName@entry=0x564858b86234 \"sessionvariable.c\",\n> lineNumber=lineNumber@entry=270) at assert.c:69\n> #3 0x000056485874fec6 in sync_sessionvars_xact_callback (event=<optimized\n> out>, arg=<optimized out>) at sessionvariable.c:270\n> #4 sync_sessionvars_xact_callback (event=<optimized out>, arg=<optimized\n> out>) at sessionvariable.c:253\n> #5 0x000056485868030a in CallXactCallbacks (event=XACT_EVENT_PRE_COMMIT)\n> at xact.c:3644\n> #6 CommitTransaction () at xact.c:2178\n> #7 0x0000564858681975 in CommitTransactionCommand () at xact.c:3043\n> #8 0x000056485892b7a9 in finish_xact_command () at postgres.c:2722\n> #9 0x000056485892dc5b in finish_xact_command () at postgres.c:2720\n> #10 exec_simple_query () at postgres.c:1240\n> #11 0x000056485892f70a in PostgresMain () at 
postgres.c:4498\n> #12 0x000056485889a479 in BackendRun (port=<optimized out>,\n> port=<optimized out>) at postmaster.c:4594\n> #13 BackendStartup (port=<optimized out>) at postmaster.c:4322\n> #14 ServerLoop () at postmaster.c:1802\n> #15 0x000056485889b47c in PostmasterMain () at postmaster.c:1474\n> #16 0x00005648585c60c0 in main (argc=5, argv=0x564858e553f0) at main.c:198\n>\n\nattached version was ok with this setting\n\nRegards\n\nPavel", "msg_date": "Wed, 22 Dec 2021 06:21:41 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nnew update - I found an error in checking ambiguous columns - the tupdesc\nwas badly released by FreeTupleDesc. I fixed this issue and did a new\nrelated regress test to cover this path.\n\nRegards\n\nNice holidays\n\nPavel", "msg_date": "Sat, 25 Dec 2021 18:20:36 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "---------- Forwarded message ---------\nOd: Pavel Stehule <pavel.stehule@gmail.com>\nDate: po 27. 12. 2021 v 5:30\nSubject: Re: Schema variables - new implementation for Postgres 15\nTo: Justin Pryzby <pryzby@telsasoft.com>\n\n\nHi\n\nne 26. 12. 2021 v 15:43 odesílatel Justin Pryzby <pryzby@telsasoft.com>\nnapsal:\n\n> > > Maybe \"session variable\" should be added to the glossary.\n> >\n> > done\n>\n> + A persistent database object that holds an value in session memory.\n> + This memory is not shared across sessions, and after session end,\n> this\n> + memory (the value) is released. 
The access (read or write) to\n> session variables\n> + is controlled by access rigths similary to other database object\n> access rigts.\n>\n> an value => a value\n> rigths => rights\n> rigts => rights\n>\n\nfixed\n\nRegards\n\nPavel", "msg_date": "Mon, 27 Dec 2021 08:45:06 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Fwd: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nrebase\n\nRegards\n\nPavel", "msg_date": "Mon, 3 Jan 2022 08:17:44 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On Wed, 3 Nov 2021 at 13:05, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>\n> 2) I find this a bit confusing:\n>\n> SELECT non_existent_variable;\n> test=# select s;\n> ERROR: column \"non_existent_variable\" does not exist\n> LINE 1: select non_existent_variable;\n>\n> I wonder if this means using SELECT to read variables is a bad idea, and\n> we should have a separate command, just like we have LET (instead of\n> just using UPDATE in some way).\n>\n\nHmm. This way of reading variables worries me for a different reason\n-- I think it makes it all too easy to break existing applications by\ninadvertently (or deliberately) defining variables that conflict with\ncolumn names referred to in existing queries.\n\nFor example, if I define a variable called \"relkind\", then psql's \\sv\nmeta-command is broken because the query it performs can't distinguish\nbetween the column and the variable.\n\nSimilarly, there's ambiguity between alias.colname and\nschema.variablename. So, for example, if I do the following:\n\nCREATE SCHEMA n;\nCREATE VARIABLE n.nspname AS int;\n\nthen lots of things are broken, including pg_dump and a number of psql\nmeta-commands. 
I don't think it's acceptable to make it so easy for a\nuser to break the system in this way.\n\nThose are examples that a malicious user might use, but even without\nsuch examples, I think it would be far too easy to inadvertently break\na large application by defining a variable that conflicted with a\ncolumn name you didn't know about.\n\nRegards,\nDean\n\n\n", "msg_date": "Thu, 13 Jan 2022 12:54:21 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "čt 13. 1. 2022 v 13:54 odesílatel Dean Rasheed <dean.a.rasheed@gmail.com>\nnapsal:\n\n> On Wed, 3 Nov 2021 at 13:05, Tomas Vondra <tomas.vondra@enterprisedb.com>\n> wrote:\n> >\n> > 2) I find this a bit confusing:\n> >\n> > SELECT non_existent_variable;\n> > test=# select s;\n> > ERROR: column \"non_existent_variable\" does not exist\n> > LINE 1: select non_existent_variable;\n> >\n> > I wonder if this means using SELECT to read variables is a bad idea, and\n> > we should have a separate command, just like we have LET (instead of\n> > just using UPDATE in some way).\n> >\n>\n> Hmm. This way of reading variables worries me for a different reason\n> -- I think it makes it all too easy to break existing applications by\n> inadvertently (or deliberately) defining variables that conflict with\n> column names referred to in existing queries.\n>\n> For example, if I define a variable called \"relkind\", then psql's \\sv\n> meta-command is broken because the query it performs can't distinguish\n> between the column and the variable.\n>\n> Similarly, there's ambiguity between alias.colname and\n> schema.variablename. So, for example, if I do the following:\n>\n> CREATE SCHEMA n;\n> CREATE VARIABLE n.nspname AS int;\n>\n> then lots of things are broken, including pg_dump and a number of psql\n> meta-commands. 
I don't think it's acceptable to make it so easy for a\n> user to break the system in this way.\n>\n> Those are examples that a malicious user might use, but even without\n> such examples, I think it would be far too easy to inadvertently break\n> a large application by defining a variable that conflicted with a\n> column name you didn't know about.\n>\n\nThis is a valid issue, and it should be solved, or reduce a risk\n\nI see two possibilities\n\na) easy solution can be implementation of other conflict strategy -\nvariables have lower priority than tables with possibility to raise\nwarnings if some identifiers are ambiguous. This is easy to implement, and\nwith warning I think there should not be some unwanted surprises for\ndevelopers. This is safe in meaning - no variable can break any query.\n\nb) harder implementation (but I long think about it) can be implementation\nof schema scope access. It can be used for implementation of schema private\nobjects. It doesn't solve the described issue, but it can reduce the risk\nof collision just for one schema.\n\nBoth possibilities can be implemented together - but the @b solution should\nbe implemented from zero - and it is more generic concept, and then I\nprefer @a\n\nDean, can @a work for you?\n\nRegards\n\nPavel\n\n\n\n> Regards,\n> Dean\n>", "msg_date": "Thu, 13 Jan 2022 15:15:17 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On Thu, Jan 13, 2022, at 18:24, Dean Rasheed wrote:\n> Those are examples that a malicious user might use, but even without\n> such examples, I think it would be far too easy to inadvertently break\n> a large application by defining a variable that conflicted with a\n> column name you didn't know about.\n\nI think there is also a readability problem with the non-locality of this feature.\n\nI think it would be better to have an explicit namespace for these global variables, so that when reading code, they would stand-out.\nAs a bonus, that would also solve the risk of breaking code, as you pointed out.\n\nMost code should never need any global variables at all, so in the rare occasions when they are needed, I think it's perfectly fine if some more verbose fully-qualified syntax was needed to use them, rather than to pollute the namespace and risk breaking code.\n\nI want to bring up an idea presented earlier in a different thread:\n\nHow about exploiting reserved SQL keywords followed by a dot, as special labels?\n\nThis could solve the problem with this patch, as well as the other root label patch to access function parameters.\n\nIt's an unorthodox idea, but due to legacy, I think we need to be creative, if we want a safe solution with no risk of breaking any code, which I think should be a requirement.\n\nTaking inspiration from Javascript, how about using the SQL reserved keyword \"window\"?\nIn Javascript, \"window.variableName\" means that the variable variableName declared at the global scope.\n\nFurthermore:\n\n\"from\" could be
used to access function/procedure IN parameters.\n\"to\" could be used to access function OUT parameters.\n\"from\" or \"to\" could be used to access function INOUT parameters.\n\nExamples:\n\nSELECT u.user_id\nINTO to.user_id\nFROM users u\nWHERE u.username = from.username;\n\n-- After authentication, the authenticated user_id could be stored as a\nglobal variable:\nwindow.user_id := to.user_id;\n\n-- The authenticated user_id could then be used in queries that should\nfilter on user_id:\nSELECT o.order_id\nFROM orders o\nWHERE o.user_id = window.user_id;\n\nThis would require endorsement from the SQL committee of course, otherwise\nwe would face problems if they suddenly would introduce syntax where a\nreserved keyword could be followed by a dot.\n\nI think from a readability perspective, it works, since the different\nmeanings can be distinguished by writing one in UPPERCASE and the other in\nlowercase.\n\n/Joel", "msg_date": "Thu, 13 Jan 2022 19:59:19 +0530", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "čt 13. 1.
2022 v 15:29 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n\n> On Thu, Jan 13, 2022, at 18:24, Dean Rasheed wrote:\n> > Those are examples that a malicious user might use, but even without\n> > such examples, I think it would be far too easy to inadvertently break\n> > a large application by defining a variable that conflicted with a\n> > column name you didn't know about.\n>\n> I think there is also a readability problem with the non-locality of this\n> feature.\n>\n> I think it would be better to have an explicit namespace for these global\n> variables, so that when reading code, they would stand-out.\n> As a bonus, that would also solve the risk of breaking code, as you\n> pointed out.\n>\n> Most code should never need any global variables at all, so in the rare\n> occasions when they are needed, I think it's perfectly fine if some more\n> verbose fully-qualified syntax was needed to use them, rather than to\n> pollute the namespace and risk breaking code.\n>\n\nThere are few absolutely valid use cases\n\n1. scripting - currently used GUC instead session variables are slow, and\nwithout types\n\n2. RLS\n\n3. Migration from Oracle - although I agree, so package variables are used\nmore times badly, it used there. 
And only in few times is possibility to\nrefactor code when you do migration from Oracle to Postgres, and there is\nnecessity to have session variables,\n\n\n> I want to bring up an idea presented earlier in a different thread:\n>\n> How about exploiting reserved SQL keywords followed by a dot, as special\n> labels?\n>\n> This could solve the problem with this patch, as well as the other root\n> label patch to access function parameters.\n>\n> It's an unorthodox idea, but due to legacy, I think we need to be\n> creative, if we want a safe solution with no risk of breaking any code,\n> which I think should be a requirement.\n>\n> Taking inspiration from Javascript, how about using the SQL reserved\n> keyword \"window\"?\n> In Javascript, \"window.variableName\" means that the variable variableName\n> declared at the global scope.\n>\n\nI cannot imagine how the \"window\" keyword can work in SQL context. In\nJavascript \"window\" is an object - it is not a keyword, and it makes sense\nin usual Javascript context inside HTML browsers.\n\nRegards\n\nPavel\n\n\n\n>\n> Furthermore:\n>\n> \"from\" could be used to access function/procedure IN parameters.\n> \"to\" could be used to access function OUT parameters.\n> \"from\" or \"to\" could be used to access function INOUT parameters.\n>\n> Examples:\n>\n> SELECT u.user_id\n> INTO to.user_id\n> FROM users u\n> WHERE u.username = from.username;\n>\n> -- After authentication, the authenticated user_id could be stored as a\n> global variable:\n> window.user_id := to.user_id;\n>\n> -- The authenticated user_id could then be used in queries that should\n> filter on user_id:\n> SELECT o.order_id\n> FROM orders o\n> WHERE o.user_id = window.user_id;\n>\n> This would require endorsement from the SQL committee of course, otherwise\n> we would face problems if they suddenly would introduce syntax where a\n> reserved keyword could be followed by a dot.\n>\n> I think from a readability perspective, it works, since the different\n> 
meanings can be distinguished by writing one in UPPERCASE and the other in\n> lowercase.\n>\n> /Joel\n>\n\nčt 13. 1. 2022 v 15:29 odesílatel Joel Jacobson <joel@compiler.org> napsal:On Thu, Jan 13, 2022, at 18:24, Dean Rasheed wrote:> Those are examples that a malicious user might use, but even without> such examples, I think it would be far too easy to inadvertently break> a large application by defining a variable that conflicted with a> column name you didn't know about.I think there is also a readability problem with the non-locality of this feature.I think it would be better to have an explicit namespace for these global variables, so that when reading code, they would stand-out.As a bonus, that would also solve the risk of breaking code, as you pointed out.Most code should never need any global variables at all, so in the rare occasions when they are needed, I think it's perfectly fine if some more verbose fully-qualified syntax was needed to use them, rather than to pollute the namespace and risk breaking code.There are few absolutely valid use cases1. scripting - currently used GUC instead session variables are slow, and without types2. RLS 3. Migration from Oracle - although I agree, so package variables are used more times badly, it used there. 
And only in few times is possibility to refactor code when you do migration from Oracle to Postgres, and there is necessity to have session variables,I want to bring up an idea presented earlier in a different thread:How about exploiting reserved SQL keywords followed by a dot, as special labels?This could solve the problem with this patch, as well as the other root label patch to access function parameters.It's an unorthodox idea, but due to legacy, I think we need to be creative, if we want a safe solution with no risk of breaking any code, which I think should be a requirement.Taking inspiration from Javascript, how about using the SQL reserved keyword \"window\"?In Javascript, \"window.variableName\" means that the variable variableName declared at the global scope.I cannot imagine how the \"window\" keyword can work in SQL context. In Javascript \"window\" is an object - it is not a keyword, and it makes sense in usual Javascript context inside HTML browsers.RegardsPavel Furthermore:\"from\" could be used to access function/procedure IN parameters.\"to\" could be used to access function OUT parameters.\"from\" or \"to\" could be used to access function INOUT parameters.Examples:SELECT u.user_idINTO to.user_idFROM users uWHERE u.username = from.username;-- After authentication, the authenticated user_id could be stored as a global variable:window.user_id := to.user_id;-- The authenticated user_id could then be used in queries that should filter on user_id:SELECT o.order_idFROM orders oWHERE o.user_id = window.user_id;This would require endorsement from the SQL committee of course, otherwise we would face problems if they suddenly would introduce syntax where a reserved keyword could be followed by a dot.I think from a readability perspective, it works, since the different meanings can be distinguished by writing one in UPPERCASE and the other in lowercase./Joel", "msg_date": "Thu, 13 Jan 2022 15:42:37 +0100", "msg_from": "Pavel Stehule 
<pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On Thu, Jan 13, 2022, at 20:12, Pavel Stehule wrote:\n>I cannot imagine how the \"window\" keyword can work in SQL context. In Javascript \"window\" is an object - it is not a keyword, and it makes sense in usual Javascript context inside HTML browsers.\n\nI was thinking since Javascript is by far the most known programming language, the \"window\" word would be familiar and easy to remember, but I agree, it's not perfect.\n\nHm, \"values\" would be nice, it's reserved in SQL:2016 [1] and in DB2/Mimer/MySQL/Oracle/SQL Server/Teradata [2], but unfortunately not in PostgreSQL [1], so perhaps not doable.\n\nSyntax:\n\nvalues.[schema name].[variable name]\n\n[1] https://www.postgresql.org/docs/current/sql-keywords-appendix.html\n[2] https://en.wikipedia.org/wiki/SQL_reserved_words\n\nOn Thu, Jan 13, 2022, at 20:12, Pavel Stehule wrote:>I cannot imagine how the \"window\" keyword can work in SQL context. In Javascript \"window\" is an object - it is not a keyword, and it makes sense in usual Javascript context inside HTML browsers.I was thinking since Javascript is by far the most known programming language, the \"window\" word would be familiar and easy to remember, but I agree, it's not perfect.Hm, \"values\" would be nice, it's reserved in SQL:2016 [1] and in DB2/Mimer/MySQL/Oracle/SQL Server/Teradata [2], but unfortunately not in PostgreSQL [1], so perhaps not doable.Syntax:values.[schema name].[variable name][1] https://www.postgresql.org/docs/current/sql-keywords-appendix.html[2] https://en.wikipedia.org/wiki/SQL_reserved_words", "msg_date": "Thu, 13 Jan 2022 22:30:40 +0530", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "čt 13. 1. 
2022 v 18:01 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n\n> On Thu, Jan 13, 2022, at 20:12, Pavel Stehule wrote:\n> >I cannot imagine how the \"window\" keyword can work in SQL context. In\n> Javascript \"window\" is an object - it is not a keyword, and it makes sense\n> in usual Javascript context inside HTML browsers.\n>\n> I was thinking since Javascript is by far the most known programming\n> language, the \"window\" word would be familiar and easy to remember, but I\n> agree, it's not perfect.\n>\n\nMainly the \"window\" is just a global variable. It is not a special keyword.\nSo the syntax object.property is usual.\n\n\n> Hm, \"values\" would be nice, it's reserved in SQL:2016 [1] and in\n> DB2/Mimer/MySQL/Oracle/SQL Server/Teradata [2], but unfortunately not in\n> PostgreSQL [1], so perhaps not doable.\n>\n> Syntax:\n>\n> values.[schema name].[variable name]\n>\n\nThis doesn't help too much. This syntax is too long. It can solve the\ndescribed issue, but only when all three parts will be required, and\nwriting every time VALUES.schemaname.variablename is not too practical. And\nif we require this three part identifier every time, then it can be used\nwith the already supported dbname.schemaname.varname. Almost all collisions\ncan be fixed by using a three part identifier. But it doesn't look too\nhandy.\n\nI like the idea of prioritizing tables over variables with warnings when\ncollision is detected. It cannot break anything. And it allows to using\nshort identifiers when there is not collision. If somebody don't want to\nany collision then can use schema \"vars\", \"values\", or what he/she likes.\nIt is near to your proposal - it is not too often so people use table alias\nlike \"values\" (although in EAV case it is possible).\n\n\n\n\n> [1] https://www.postgresql.org/docs/current/sql-keywords-appendix.html\n> [2] https://en.wikipedia.org/wiki/SQL_reserved_words\n>\n>\n\nčt 13. 1. 
2022 v 18:01 odesílatel Joel Jacobson <joel@compiler.org> napsal:On Thu, Jan 13, 2022, at 20:12, Pavel Stehule wrote:>I cannot imagine how the \"window\" keyword can work in SQL context. In Javascript \"window\" is an object - it is not a keyword, and it makes sense in usual Javascript context inside HTML browsers.I was thinking since Javascript is by far the most known programming language, the \"window\" word would be familiar and easy to remember, but I agree, it's not perfect.Mainly the \"window\" is just a global variable. It is not a special keyword. So the syntax object.property is usual.Hm, \"values\" would be nice, it's reserved in SQL:2016 [1] and in DB2/Mimer/MySQL/Oracle/SQL Server/Teradata [2], but unfortunately not in PostgreSQL [1], so perhaps not doable.Syntax:values.[schema name].[variable name]This doesn't help too much. This syntax is too long. It can solve the described issue, but only when all three parts will be required, and writing every time VALUES.schemaname.variablename is not too practical. And if we require this three part identifier every time, then it can be used with the already supported dbname.schemaname.varname. Almost all collisions can be fixed by using a three part identifier. But it doesn't look too handy. I like the idea of prioritizing tables over variables with warnings when collision is detected. It cannot break anything. And it allows to using short identifiers when there is not collision. If somebody don't want to any collision then can use schema \"vars\", \"values\", or what he/she likes. It is near to your proposal - it is not too often so people use table alias like \"values\" (although in EAV case it is possible). 
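[Editor's note: to make the resolution rule discussed above concrete, here is a sketch of how a collision could play out. Everything below uses the proposed, uncommitted patch syntax (CREATE VARIABLE / LET), so it is illustrative only, not current PostgreSQL behavior.]

```sql
-- Proposed-patch syntax; none of this is committed PostgreSQL behavior.
CREATE VARIABLE relname text;
LET relname = 'pg_class';

-- Both occurrences of "relname" resolve to the pg_class column here,
-- because relations take priority under the proposed rule, so this is
-- a tautology (and a good candidate for a shadowing warning):
SELECT count(*) FROM pg_class WHERE relname = relname;

-- Per the thread, qualifying the reference resolves the ambiguity
-- in favor of the variable:
SELECT count(*) FROM pg_class WHERE relname = public.relname;
```

The optional warning proposed in the thread (a GUC along the lines of session_variables_ambiguity_warning) would fire on the shadowed reference, making accidental collisions visible without breaking existing queries.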
[1] https://www.postgresql.org/docs/current/sql-keywords-appendix.html[2] https://en.wikipedia.org/wiki/SQL_reserved_words", "msg_date": "Thu, 13 Jan 2022 18:41:27 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On Thu, 13 Jan 2022 at 17:42, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> I like the idea of prioritizing tables over variables with warnings when collision is detected. It cannot break anything. And it allows to using short identifiers when there is not collision.\n\nYeah, that seems OK, as long as it's clearly documented. I don't think\na warning is necessary.\n\n(FWIW, testing with dbfiddle, that appears to match Db2's behaviour).\n\nRegards,\nDean\n\n\n", "msg_date": "Thu, 13 Jan 2022 18:23:35 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "čt 13. 1. 2022 v 19:23 odesílatel Dean Rasheed <dean.a.rasheed@gmail.com>\nnapsal:\n\n> On Thu, 13 Jan 2022 at 17:42, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> >\n> > I like the idea of prioritizing tables over variables with warnings when\n> collision is detected. It cannot break anything. And it allows to using\n> short identifiers when there is not collision.\n>\n> Yeah, that seems OK, as long as it's clearly documented. I don't think\n> a warning is necessary.\n>\n\nThe warning can be disabled by default, but I think it should be there.\nThis is a signal, so some in the database schema should be renamed. Maybe -\nsession_variables_ambiguity_warning.\n\n\n> (FWIW, testing with dbfiddle, that appears to match Db2's behaviour).\n>\n\nThank you for check\n\nRegards\n\nPavel\n\n\n> Regards,\n> Dean\n>\n\nčt 13. 1. 
2022 v 19:23 odesílatel Dean Rasheed <dean.a.rasheed@gmail.com> napsal:On Thu, 13 Jan 2022 at 17:42, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> I like the idea of prioritizing tables over variables with warnings when collision is detected. It cannot break anything. And it allows to using short identifiers when there is not collision.\n\nYeah, that seems OK, as long as it's clearly documented. I don't think\na warning is necessary.The warning can be disabled by default, but I think it should be there. This is a signal, so some in the database schema should be renamed. Maybe - session_variables_ambiguity_warning. \n\n(FWIW, testing with dbfiddle, that appears to match Db2's behaviour). Thank you for checkRegardsPavel\n\nRegards,\nDean", "msg_date": "Thu, 13 Jan 2022 19:32:26 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi,\n\nOn Thu, Jan 13, 2022 at 07:32:26PM +0100, Pavel Stehule wrote:\n> čt 13. 1. 2022 v 19:23 odesílatel Dean Rasheed <dean.a.rasheed@gmail.com>\n> napsal:\n> \n> > On Thu, 13 Jan 2022 at 17:42, Pavel Stehule <pavel.stehule@gmail.com>\n> > wrote:\n> > >\n> > > I like the idea of prioritizing tables over variables with warnings when\n> > collision is detected. It cannot break anything. And it allows to using\n> > short identifiers when there is not collision.\n> >\n> > Yeah, that seems OK, as long as it's clearly documented. I don't think\n> > a warning is necessary.\n\nWhat should be the behavior for a cached plan that uses a variable when a\nconflicting relation is later created? I think that it should be the same as a\nsearch_path change and the plan should be discarded.\n\n> The warning can be disabled by default, but I think it should be there.\n> This is a signal, so some in the database schema should be renamed. 
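[Editor's note: the search_path comparison can be made concrete. Today's committed behavior, which the variable case would need to mirror, is that a cached plan over an unqualified name is discarded when search_path changes. A sketch, not re-run here:]

```sql
CREATE SCHEMA a;
CREATE SCHEMA b;
CREATE TABLE a.t (x int);
CREATE TABLE b.t (x int);

SET search_path = a;
PREPARE p AS SELECT count(*) FROM t;
SET plan_cache_mode = force_generic_plan;
EXECUTE p;   -- generic plan is built against a.t

SET search_path = b;
EXECUTE p;   -- cached plan is discarded; a new plan reads b.t
```

Creating a conflicting relation after a plan has captured a variable reference would analogously have to invalidate that plan.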
Maybe -\n> session_variables_ambiguity_warning.\n\nI agree that having a way to know that a variable has been bypassed can be\nuseful.\n\n> > (FWIW, testing with dbfiddle, that appears to match Db2's behaviour).\n> >\n> \n> Thank you for check\n\nDo you know what's oracle's behavior on that?\n\n\nI've been looking at the various dependency handling, and I noticed that\ncollation are ignored, while they're accepted syntax-wise:\n\n=# create collation mycollation (locale = 'fr-FR', provider = 'icu');\nCREATE COLLATION\n\n=# create variable myvariable text collate mycollation;\nCREATE VARIABLE\n\n=# select classid::regclass, objid, objsubid, refclassid::regclass, refobjid, refobjsubid from pg_depend where classid::regclass::text = 'pg_variable' or refclassid::regclass::text = 'pg_variable';\n classid | objid | objsubid | refclassid | refobjid | refobjsubid\n-------------+-------+----------+--------------+----------+-------------\n pg_variable | 16407 | 0 | pg_namespace | 2200 | 0\n(1 row)\n\n=# let myvariable = 'AA';\nLET\n\n=# select 'AA' collate \"en-x-icu\" < myvariable;\n ?column?\n----------\n f\n(1 row)\n\n=# select 'AA' collate \"en-x-icu\" < myvariable collate mycollation;\nERROR: 42P21: collation mismatch between explicit collations \"en-x-icu\" and \"mycollation\"\nLINE 1: select 'AA' collate \"en-x-icu\" < myvariable collate mycollat...\n\nSo it's missing both dependency recording for variable's collation and also\nteaching various code that variables can have a collation.\n\nIt's also missing some invalidation detection. 
For instance:\n\n=# create variable myval text;\nCREATE VARIABLE\n\n=# let myval = 'pg_class';\nLET\n\n=# prepare s(text) as select relname from pg_class where relname = $1 or relname = myval;\nPREPARE\n\n=# set plan_cache_mode = force_generic_plan ;\nSET\n\n=# execute s ('');\n relname\n----------\n pg_class\n(1 row)\n\n=# drop variable myval ;\nDROP VARIABLE\n\n=# create variable myval int;\nCREATE VARIABLE\n\n=# execute s ('');\nERROR: XX000: cache lookup failed for session variable 16408\n\nThe plan should have been discarded and the new plan should fail for type\nproblem.\n\nStrangely, subsequent calls don't error out:\n\n=# execute s('');\n relname\n---------\n(0 rows)\n\nBut doing an explain shows that there's a problem:\n\n=# explain execute s('');\nERROR: XX000: cache lookup failed for variable 16408\n\n\n", "msg_date": "Fri, 14 Jan 2022 10:44:02 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "pá 14. 1. 2022 v 3:44 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> Hi,\n>\n> On Thu, Jan 13, 2022 at 07:32:26PM +0100, Pavel Stehule wrote:\n> > čt 13. 1. 2022 v 19:23 odesílatel Dean Rasheed <dean.a.rasheed@gmail.com\n> >\n> > napsal:\n> >\n> > > On Thu, 13 Jan 2022 at 17:42, Pavel Stehule <pavel.stehule@gmail.com>\n> > > wrote:\n> > > >\n> > > > I like the idea of prioritizing tables over variables with warnings\n> when\n> > > collision is detected. It cannot break anything. And it allows to using\n> > > short identifiers when there is not collision.\n> > >\n> > > Yeah, that seems OK, as long as it's clearly documented. I don't think\n> > > a warning is necessary.\n>\n> What should be the behavior for a cached plan that uses a variable when a\n> conflicting relation is later created? 
I think that it should be the same\n> as a\n> search_path change and the plan should be discarded.\n>\n\nThis is a more generic problem - creating a new DDL object doesn't\ninvalidate plans.\n\nhttps://www.postgresql.org/message-id/2589876.1641914327%40sss.pgh.pa.us\n\n\n\n>\n> > The warning can be disabled by default, but I think it should be there.\n> > This is a signal, so some in the database schema should be renamed.\n> Maybe -\n> > session_variables_ambiguity_warning.\n>\n> I agree that having a way to know that a variable has been bypassed can be\n> useful.\n>\n> > > (FWIW, testing with dbfiddle, that appears to match Db2's behaviour).\n> > >\n> >\n> > Thank you for check\n>\n> Do you know what's oracle's behavior on that?\n>\n>\nOracle is very different, because package variables are not visible from\nplain SQL. And change of interface invalidates dependent objects and\nrequires recompilation. So it is a little bit more sensitive. If I remember\nwell, the SQL identifiers have bigger priority than PL/SQL identifiers\n(package variables), so proposed behavior is very similar to Oracle\nbehavior too. 
The risk of unwanted collision is reduced (on Oracle) by\nlocal visibility of package variables, and availability of package\nvariables only in some environments.\n\n\n\n>\n> I've been looking at the various dependency handling, and I noticed that\n> collation are ignored, while they're accepted syntax-wise:\n>\n> =# create collation mycollation (locale = 'fr-FR', provider = 'icu');\n> CREATE COLLATION\n>\n> =# create variable myvariable text collate mycollation;\n> CREATE VARIABLE\n>\n> =# select classid::regclass, objid, objsubid, refclassid::regclass,\n> refobjid, refobjsubid from pg_depend where classid::regclass::text =\n> 'pg_variable' or refclassid::regclass::text = 'pg_variable';\n> classid | objid | objsubid | refclassid | refobjid | refobjsubid\n> -------------+-------+----------+--------------+----------+-------------\n> pg_variable | 16407 | 0 | pg_namespace | 2200 | 0\n> (1 row)\n>\n> =# let myvariable = 'AA';\n> LET\n>\n> =# select 'AA' collate \"en-x-icu\" < myvariable;\n> ?column?\n> ----------\n> f\n> (1 row)\n>\n> =# select 'AA' collate \"en-x-icu\" < myvariable collate mycollation;\n> ERROR: 42P21: collation mismatch between explicit collations \"en-x-icu\"\n> and \"mycollation\"\n> LINE 1: select 'AA' collate \"en-x-icu\" < myvariable collate mycollat...\n>\n> So it's missing both dependency recording for variable's collation and also\n> teaching various code that variables can have a collation.\n>\n> It's also missing some invalidation detection. 
For instance:\n>\n> =# create variable myval text;\n> CREATE VARIABLE\n>\n> =# let myval = 'pg_class';\n> LET\n>\n> =# prepare s(text) as select relname from pg_class where relname = $1 or\n> relname = myval;\n> PREPARE\n>\n> =# set plan_cache_mode = force_generic_plan ;\n> SET\n>\n> =# execute s ('');\n> relname\n> ----------\n> pg_class\n> (1 row)\n>\n> =# drop variable myval ;\n> DROP VARIABLE\n>\n> =# create variable myval int;\n> CREATE VARIABLE\n>\n> =# execute s ('');\n> ERROR: XX000: cache lookup failed for session variable 16408\n>\n> The plan should have been discarded and the new plan should fail for type\n> problem.\n>\n> Strangely, subsequent calls don't error out:\n>\n> =# execute s('');\n> relname\n> ---------\n> (0 rows)\n>\n> But doing an explain shows that there's a problem:\n>\n> =# explain execute s('');\n> ERROR: XX000: cache lookup failed for variable 16408\n>\n\nlooks like bug\n\nRegards\n\nPavel\n\npá 14. 1. 2022 v 3:44 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:Hi,\n\nOn Thu, Jan 13, 2022 at 07:32:26PM +0100, Pavel Stehule wrote:\n> čt 13. 1. 2022 v 19:23 odesílatel Dean Rasheed <dean.a.rasheed@gmail.com>\n> napsal:\n> \n> > On Thu, 13 Jan 2022 at 17:42, Pavel Stehule <pavel.stehule@gmail.com>\n> > wrote:\n> > >\n> > > I like the idea of prioritizing tables over variables with warnings when\n> > collision is detected. It cannot break anything. And it allows to using\n> > short identifiers when there is not collision.\n> >\n> > Yeah, that seems OK, as long as it's clearly documented. I don't think\n> > a warning is necessary.\n\nWhat should be the behavior for a cached plan that uses a variable when a\nconflicting relation is later created?  
I think that it should be the same as a\nsearch_path change and the plan should be discarded.This is a more generic problem - creating a new DDL object doesn't invalidate plans.https://www.postgresql.org/message-id/2589876.1641914327%40sss.pgh.pa.us \n\n> The warning can be disabled by default, but I think it should be there.\n> This is a signal, so some in the database schema should be renamed. Maybe -\n> session_variables_ambiguity_warning.\n\nI agree that having a way to know that a variable has been bypassed can be\nuseful.\n\n> > (FWIW, testing with dbfiddle, that appears to match Db2's behaviour).\n> >\n> \n> Thank you for check\n\nDo you know what's oracle's behavior on that?\nOracle is very different, because package variables are not visible from plain SQL. And change of interface invalidates dependent objects and requires recompilation. So it is a little bit more sensitive. If I remember well, the SQL identifiers have bigger priority than PL/SQL identifiers (package variables), so proposed behavior is very similar to Oracle behavior too. The risk of unwanted collision is reduced (on Oracle) by local visibility of package variables, and availability of package variables only in some environments. 
\n\nI've been looking at the various dependency handling, and I noticed that\ncollation are ignored, while they're accepted syntax-wise:\n\n=# create collation mycollation (locale = 'fr-FR', provider = 'icu');\nCREATE COLLATION\n\n=# create variable myvariable text collate mycollation;\nCREATE VARIABLE\n\n=# select classid::regclass, objid, objsubid, refclassid::regclass, refobjid, refobjsubid from pg_depend where classid::regclass::text = 'pg_variable' or refclassid::regclass::text = 'pg_variable';\n   classid   | objid | objsubid |  refclassid  | refobjid | refobjsubid\n-------------+-------+----------+--------------+----------+-------------\n pg_variable | 16407 |        0 | pg_namespace |     2200 |           0\n(1 row)\n\n=# let myvariable = 'AA';\nLET\n\n=# select 'AA' collate \"en-x-icu\" < myvariable;\n ?column?\n----------\n f\n(1 row)\n\n=# select 'AA' collate \"en-x-icu\" < myvariable collate mycollation;\nERROR:  42P21: collation mismatch between explicit collations \"en-x-icu\" and \"mycollation\"\nLINE 1: select 'AA' collate \"en-x-icu\" < myvariable collate mycollat...\n\nSo it's missing both dependency recording for variable's collation and also\nteaching various code that variables can have a collation.\n\nIt's also missing some invalidation detection.  
For instance:\n\n=# create variable myval text;\nCREATE VARIABLE\n\n=# let myval = 'pg_class';\nLET\n\n=# prepare s(text) as select relname from pg_class where relname = $1 or relname = myval;\nPREPARE\n\n=# set plan_cache_mode = force_generic_plan ;\nSET\n\n=# execute s ('');\n relname\n----------\n pg_class\n(1 row)\n\n=# drop variable myval ;\nDROP VARIABLE\n\n=# create variable myval int;\nCREATE VARIABLE\n\n=# execute s ('');\nERROR:  XX000: cache lookup failed for session variable 16408\n\nThe plan should have been discarded and the new plan should fail for type\nproblem.\n\nStrangely, subsequent calls don't error out:\n\n=# execute s('');\n relname\n---------\n(0 rows)\n\nBut doing an explain shows that there's a problem:\n\n=# explain execute s('');\nERROR:  XX000: cache lookup failed for variable 16408looks like bugRegardsPavel", "msg_date": "Fri, 14 Jan 2022 09:18:14 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": ">\n> For example, if I define a variable called \"relkind\", then psql's \\sv\n> meta-command is broken because the query it performs can't distinguish\n> between the column and the variable.\n>\n> If variables use : as prefix you´ll never have these conflicts.\n\nselect relkind from pg_class where relkind = :relkind\n\nFor example, if I define a variable called \"relkind\", then psql's \\sv\nmeta-command is broken because the query it performs can't distinguish\nbetween the column and the variable.If variables use : as prefix you´ll never have these conflicts.select relkind from pg_class where relkind = :relkind", "msg_date": "Fri, 14 Jan 2022 07:49:09 -0300", "msg_from": "Marcos Pegoraro <marcos@f10.com.br>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi,\n\nOn Fri, Jan 14, 2022 at 07:49:09AM -0300, Marcos Pegoraro wrote:\n> >\n> > For 
example, if I define a variable called \"relkind\", then psql's \\sv\n> > meta-command is broken because the query it performs can't distinguish\n> > between the column and the variable.\n> >\n> If variables use : as prefix you´ll never have these conflicts.\n> \n> select relkind from pg_class where relkind = :relkind\n\nThis is already used by psql client side variables, so this is not an option.\n\n\n", "msg_date": "Fri, 14 Jan 2022 19:06:51 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "pá 14. 1. 2022 v 11:49 odesílatel Marcos Pegoraro <marcos@f10.com.br>\nnapsal:\n\n> For example, if I define a variable called \"relkind\", then psql's \\sv\n>> meta-command is broken because the query it performs can't distinguish\n>> between the column and the variable.\n>>\n>> If variables use : as prefix you´ll never have these conflicts.\n>\n> select relkind from pg_class where relkind = :relkind\n>\n\nThis syntax is used for client side variables already.\n\nThis is similar to MSSQL or MySQL philosophy. But the disadvantage of this\nmethod is the impossibility of modularization - all variables are in one\nspace (although there are nested scopes).\n\nThe different syntax disallows any collision well, it is far to what is\nmore usual standard in this area. And if we introduce special syntax, then\nthere is no way back. We cannot use :varname - this syntax is used already,\nbut we can use, theoretically, @var or $var. But, personally, I don't want\nto use it, if there is possibility to do without it. The special syntax can\nbe used maybe for direct access to function arguments, or for not\npersistent (temporal) session variables like MSSQL. There is a relatively\nbig space of functionality for session variables, and the system that I\nused is based on ANSI SQL/PSM or DB2 and it is near to Oracle. 
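[Editor's note: for reference, the psql client-side interpolation that already claims the :name syntax — these are expanded by psql itself before the statement ever reaches the server:]

```sql
-- psql meta-commands, not server-side SQL:
\set min_pages 100
SELECT relname FROM pg_class WHERE relpages > :min_pages;

-- :'name' interpolates the value as a correctly quoted literal:
\set who 'postgres'
SELECT rolname FROM pg_roles WHERE rolname = :'who';
```

Because this interpolation happens entirely in the client, reusing : for server-side variables would be ambiguous in any script run through psql.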
It has a lot\nof advantages for writing stored procedures. On other hand, for adhoc work\nthe session variables like MySQL (without declaration) can be handy, so I\ndon't want to use (and block) syntax that can be used for something\ndifferent.\n\n\n\n\n>\n>\n\npá 14. 1. 2022 v 11:49 odesílatel Marcos Pegoraro <marcos@f10.com.br> napsal:For example, if I define a variable called \"relkind\", then psql's \\sv\nmeta-command is broken because the query it performs can't distinguish\nbetween the column and the variable.If variables use : as prefix you´ll never have these conflicts.select relkind from pg_class where relkind = :relkindThis syntax is used for client side variables already.This is similar to MSSQL or MySQL philosophy. But the disadvantage of this method is the impossibility of modularization - all variables are in one space (although there are nested scopes).The different syntax disallows any collision well, it is far to what is more usual standard in this area. And if we introduce special syntax, then there is no way back. We cannot use :varname - this syntax is used already, but we can use, theoretically, @var or $var. But, personally, I don't want to use it, if there is possibility to do without it. The special syntax can be used maybe for direct access to function arguments, or for not persistent (temporal) session variables like MSSQL. There is a relatively big space of functionality for session variables, and the system that I used is based on ANSI SQL/PSM or DB2 and it is near to Oracle. It has a lot of advantages for writing stored procedures. 
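[Editor's note: the GUC-based workaround mentioned earlier in the thread — usable today for scripting, but untyped and, per Pavel, comparatively slow — looks like this:]

```sql
-- Any dotted name acts as a custom, untyped session "variable":
SET myapp.user_id = '42';

-- Reads go through current_setting() and always yield text,
-- so a cast is needed at every use site:
SELECT current_setting('myapp.user_id')::int AS user_id;
```

This is roughly what typed session variables would replace: declared once, type-checked, and readable without a function call and a cast.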
On other hand, for adhoc work the session variables like MySQL (without declaration) can be handy, so I don't want to use (and block) syntax that can be used for something different.", "msg_date": "Fri, 14 Jan 2022 12:07:13 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\npá 14. 1. 2022 v 3:44 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> Hi,\n>\n> On Thu, Jan 13, 2022 at 07:32:26PM +0100, Pavel Stehule wrote:\n> > čt 13. 1. 2022 v 19:23 odesílatel Dean Rasheed <dean.a.rasheed@gmail.com\n> >\n> > napsal:\n> >\n> > > On Thu, 13 Jan 2022 at 17:42, Pavel Stehule <pavel.stehule@gmail.com>\n> > > wrote:\n> > > >\n> > > > I like the idea of prioritizing tables over variables with warnings\n> when\n> > > collision is detected. It cannot break anything. And it allows to using\n> > > short identifiers when there is not collision.\n> > >\n> > > Yeah, that seems OK, as long as it's clearly documented. I don't think\n> > > a warning is necessary.\n>\n> What should be the behavior for a cached plan that uses a variable when a\n> conflicting relation is later created? 
I think that it should be the same\n> as a\n> search_path change and the plan should be discarded.\n>\n> > The warning can be disabled by default, but I think it should be there.\n> > This is a signal, so some in the database schema should be renamed.\n> Maybe -\n> > session_variables_ambiguity_warning.\n>\n> I agree that having a way to know that a variable has been bypassed can be\n> useful.\n>\n\ndone\n\n\n>\n> > > (FWIW, testing with dbfiddle, that appears to match Db2's behaviour).\n> > >\n> >\n> > Thank you for check\n>\n> Do you know what's oracle's behavior on that?\n>\n>\n> I've been looking at the various dependency handling, and I noticed that\n> collation are ignored, while they're accepted syntax-wise:\n>\n> =# \"\n> CREATE COLLATION\n>\n> =# create variable myvariable text collate mycollation;\n> CREATE VARIABLE\n>\n> =# select classid::regclass, objid, objsubid, refclassid::regclass,\n> refobjid, refobjsubid from pg_depend where classid::regclass::text =\n> 'pg_variable' or refclassid::regclass::text = 'pg_variable';\n> classid | objid | objsubid | refclassid | refobjid | refobjsubid\n> -------------+-------+----------+--------------+----------+-------------\n> pg_variable | 16407 | 0 | pg_namespace | 2200 | 0\n> (1 row)\n>\n\nfixed\n\n\n>\n> =# let myvariable = 'AA';\n> LET\n>\n> =# select 'AA' collate \"en-x-icu\" < myvariable;\n> ?column?\n> ----------\n> f\n> (1 row)\n>\n> =# select 'AA' collate \"en-x-icu\" < myvariable collate mycollation;\n> ERROR: 42P21: collation mismatch between explicit collations \"en-x-icu\"\n> and \"mycollation\"\n> LINE 1: select 'AA' collate \"en-x-icu\" < myvariable collate mycollat...\n>\n\nWhat do you expect? I don't understand collating well, but it looks\ncorrect. Minimally the tables have the same behavior.\n\ncreate collation mycollation (locale = 'fr-FR', provider = 'icu');\ncreate table foo(mycol text collate mycollation);\nselect 'AA' collate \"en-x-icu\" < mycol from foo;\n┌──────────┐\n│ ?column? 
│\n╞══════════╡\n│ f │\n└──────────┘\n(1 row)\n\n\npostgres=# select 'AA' collate \"en-x-icu\" < mycol collate mycollation from\nfoo;\nERROR: collation mismatch between explicit collations \"en-x-icu\" and\n\"mycollation\"\nLINE 1: select 'AA' collate \"en-x-icu\" < mycol collate mycollation f...\n ^\n\n\n\n\n> So it's missing both dependency recording for variable's collation and also\n> teaching various code that variables can have a collation.\n>\n> It's also missing some invalidation detection. For instance:\n>\n> =# create variable myval text;\n> CREATE VARIABLE\n>\n> =# let myval = 'pg_class';\n> LET\n>\n> =# prepare s(text) as select relname from pg_class where relname = $1 or\n> relname = myval;\n> PREPARE\n>\n> =# set plan_cache_mode = force_generic_plan ;\n> SET\n>\n> =# execute s ('');\n> relname\n> ----------\n> pg_class\n> (1 row)\n>\n> =# drop variable myval ;\n> DROP VARIABLE\n>\n> =# create variable myval int;\n> CREATE VARIABLE\n>\n> =# execute s ('');\n> ERROR: XX000: cache lookup failed for session variable 16408\n>\n> The plan should have been discarded and the new plan should fail for type\n> problem.\n>\n> Strangely, subsequent calls don't error out:\n>\n> =# execute s('');\n> relname\n> ---------\n> (0 rows)\n>\n> But doing an explain shows that there's a problem:\n>\n> =# explain execute s('');\n> ERROR: XX000: cache lookup failed for variable 16408\n>\n\nfixed\n\nPlease, can you check the attached patches?\n\nRegards\n\nPavel", "msg_date": "Tue, 18 Jan 2022 22:01:01 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "=# \"\n>> CREATE COLLATION\n>>\n>> =# create variable myvariable text collate mycollation;\n>> CREATE VARIABLE\n>>\n>> =# select classid::regclass, objid, objsubid, refclassid::regclass,\n>> refobjid, refobjsubid from pg_depend where classid::regclass::text =\n>> 'pg_variable' or 
refclassid::regclass::text = 'pg_variable';\n>> classid | objid | objsubid | refclassid | refobjid | refobjsubid\n>> -------------+-------+----------+--------------+----------+-------------\n>> pg_variable | 16407 | 0 | pg_namespace | 2200 | 0\n>> (1 row)\n>>\n>\n> fixed\n>\n>\n>>\n>> =# let myvariable = 'AA';\n>> LET\n>>\n>> =# select 'AA' collate \"en-x-icu\" < myvariable;\n>> ?column?\n>> ----------\n>> f\n>> (1 row)\n>>\n>> =# select 'AA' collate \"en-x-icu\" < myvariable collate mycollation;\n>> ERROR: 42P21: collation mismatch between explicit collations \"en-x-icu\"\n>> and \"mycollation\"\n>> LINE 1: select 'AA' collate \"en-x-icu\" < myvariable collate mycollat...\n>>\n>\n> What do you expect? I don't understand collating well, but it looks\n> correct. Minimally the tables have the same behavior.\n>\n> create collation mycollation (locale = 'fr-FR', provider = 'icu');\n> create table foo(mycol text collate mycollation);\n> select 'AA' collate \"en-x-icu\" < mycol from foo;\n> ┌──────────┐\n> │ ?column? │\n> ╞══════════╡\n> │ f │\n> └──────────┘\n> (1 row)\n>\n>\n> postgres=# select 'AA' collate \"en-x-icu\" < mycol collate mycollation from\n> foo;\n> ERROR: collation mismatch between explicit collations \"en-x-icu\" and\n> \"mycollation\"\n> LINE 1: select 'AA' collate \"en-x-icu\" < mycol collate mycollation f...\n> ^\n>\n>\nhere is second test\n\npostgres=# CREATE COLLATION nd2 (\n provider = 'icu',\n locale = '@colStrength=secondary', -- or 'und-u-ks-level2'\n deterministic = false\n);\nCREATE COLLATION\npostgres=# create variable testv as text col\n\npostgres=# create variable testv as text collate nd2;\nCREATE VARIABLE\npostgres=# let testv = 'Ahoj';\nLET\npostgres=# select testv = 'AHOJ';\n┌──────────┐\n│ ?column? │\n╞══════════╡\n│ t │\n└──────────┘\n(1 row)\n\npostgres=# select testv = 'AHOJ' collate \"default\";\n┌──────────┐\n│ ?column? 
│\n╞══════════╡\n│ f │\n└──────────┘\n(1 row)\n\nRegards\n\nPavel", "msg_date": "Wed, 19 Jan 2022 06:07:37 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi,\n\nOn Tue, Jan 18, 2022 at 10:01:01PM +0100, Pavel Stehule wrote:\n> pá 14. 1. 2022 v 3:44 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n> \n> >\n> > =# let myvariable = 'AA';\n> > LET\n> >\n> > =# select 'AA' collate \"en-x-icu\" < myvariable;\n> > ?column?\n> > ----------\n> > f\n> > (1 row)\n> >\n> > =# select 'AA' collate \"en-x-icu\" < myvariable collate mycollation;\n> > ERROR: 42P21: collation mismatch between explicit collations \"en-x-icu\"\n> > and \"mycollation\"\n> > LINE 1: select 'AA' collate \"en-x-icu\" < myvariable collate mycollat...\n> >\n> \n> What do you expect? I don't understand collating well, but it looks\n> correct.
So I agree that the current behavior is ok,\nincluding a correct handling for wanted conflicts:\n\n=# create variable var1 text collate \"fr-x-icu\";\nCREATE VARIABLE\n\n=# create variable var2 text collate \"en-x-icu\";\nCREATE VARIABLE\n\n=# let var1 = 'hoho';\nLET\n\n=# let var2 = 'hoho';\nLET\n\n=# select var1 < var2;\nERROR: 42P22: could not determine which collation to use for string comparison\nHINT: Use the COLLATE clause to set the collation explicitly.\n\n> Please, can you check the attached patches?\n\nAll the issue I mentioned are fixed, thanks!\n\n\nI see a few problems with the other new features added though. The new\nsession_variables_ambiguity_warning GUC is called even in contexts where it\nshouldn't apply. For instance:\n\n=# set session_variables_ambiguity_warning = 1;\nSET\n\n=# create variable v text;\nCREATE VARIABLE\n\n=# DO $$\nDECLARE v text;\nBEGIN\nv := 'test';\nRAISE NOTICE 'v: %', v;\nEND;\n$$ LANGUAGE plpgsql;\nWARNING: 42702: session variable \"v\" is shadowed by column\nLINE 1: v := 'test'\n ^\nDETAIL: The identifier can be column reference or session variable reference.\nHINT: The column reference is preferred against session variable reference.\nQUERY: v := 'test'\n\nBut this \"v := 'test'\" shouldn't be a substitute for a LET, and it indeed\ndoesn't work:\n\n=# DO $$\nBEGIN\nv := 'test';\nRAISE NOTICE 'v: %', v;\nEND;\n$$ LANGUAGE plpgsql;\nERROR: 42601: \"v\" is not a known variable\nLINE 3: v := 'test';\n\nBut the RAISE NOTICE does see the session variable (which should be the correct\nbehavior I think), so the warning should have been raised for this instruction\n(and in that case the message is incorrect, as it's not shadowing a column).\n\nAlso, the pg_dump handling emits a COLLATION option for session variables even\nfor default collation, while it should only emit it if the collation is not the\ntype's default collation. 
As a reference, for attributes the SQL used is:\n\n\t\t\t\t\t\t \"CASE WHEN a.attcollation <> t.typcollation \"\n\t\t\t\t\t\t \"THEN a.attcollation ELSE 0 END AS attcollation,\\n\"\n\nAlso, should \\dV or \\dV+ show the collation?\n\nAnd a few comments on the new chunks in this version of the patch (I didn't\nlook in detail at the whole patch yet):\n\n+ <para>\n+ The session variables can be overshadowed by columns in an query. When query\n+ holds identifier or qualified identifier that can be used as session variable\n+ identifier and as column identifier too, then it is used as column identifier\n+ every time. This situation can be logged by enabling configuration\n+ parameter <xref linkend=\"guc-session-variables-ambiguity-warning\"/>.\n+ </para>\n\nIs \"overshadowed\" correct? The rest of the patch only says \"shadow(ed)\".\n\nWhile at it, here's some proposition to improve the phrasing:\n\n+ The session variables can be shadowed by column references in a query. When a\n+ query contains identifiers or qualified identifiers that could be used as both\n+ a session variable identifiers and as column identifier, then the column\n+ identifier is preferred every time. 
Warnings can be emitted when this situation\n+ happens by enabling configuration parameter <xref\n+ linkend=\"guc-session-variables-ambiguity-warning\"/>.\n\nSimilarly, the next documentation could be rephrased to:\n\n+ When on, a warning is raised when any identifier in a query could be used as both\n+ a column identifier or a session variable identifier.\n+ The default is <literal>off</literal>.\n\n\nFew other nitpicking:\n\n+ * If we really detect collision of column and variable identifier,\n+ * then we prefer column, because we don't want to allow to break\n+ * an existing valid queries by new variable.\n\ns/an existing/existing\n\n+-- it is ambigonuous, but columns are preferred\n\nambiguous?\n\n\n@@ -369,6 +367,19 @@ VariableCreate(const char *varName,\n /* dependency on extension */\n recordDependencyOnCurrentExtension(&myself, false);\n\n+ /*\n+ * Normal dependency from a domain to its collation. We know the default\n+ * collation is pinned, so don't bother recording it.\n+ */\n+ if (OidIsValid(varCollation) &&\n+ varCollation != DEFAULT_COLLATION_OID)\n\nThe comment mentions domains rather than session variables.\n\nAnd for the initial patch, while looking around I found this comment on\nfix_alternative_subplan():\n\n@@ -1866,7 +1969,9 @@ fix_alternative_subplan(PlannerInfo *root, AlternativeSubPlan *asplan,\n * replacing Aggref nodes that should be replaced by initplan output Params,\n * choosing the best implementation for AlternativeSubPlans,\n * looking up operator opcode info for OpExpr and related nodes,\n- * and adding OIDs from regclass Const nodes into root->glob->relationOids.\n+ * and adding OIDs from regclass Const nodes into root->glob->relationOids,\n+ * and replacing PARAM_VARIABLE paramid, that is the oid of the session variable\n+ * to offset the array by query used session variables. 
???\n\nI don't really understand the comment, and the \"???\" looks a bit suspicious.\nI'm assuming it's a reference to this new behavior in fix_param_node():\n\n * fix_param_node\n * Do set_plan_references processing on a Param\n+ * Collect session variables list and replace variable oid by\n+ * index to collected list.\n *\n * If it's a PARAM_MULTIEXPR, replace it with the appropriate Param from\n * root->multiexpr_params; otherwise no change is needed.\n * Just for paranoia's sake, we make a copy of the node in either case.\n+ *\n+ * If it's a PARAM_VARIABLE, then we should to calculate paramid.\n\nSome improvement on the comments would be welcome there, probably including\nsome mention to the \"glob->sessionVariables\" collected list?\n\n\n", "msg_date": "Wed, 19 Jan 2022 16:01:17 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "st 19. 1. 2022 v 9:01 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> Hi,\n>\n> On Tue, Jan 18, 2022 at 10:01:01PM +0100, Pavel Stehule wrote:\n> > pá 14. 1. 2022 v 3:44 odesílatel Julien Rouhaud <rjuju123@gmail.com>\n> napsal:\n> >\n> > >\n> > > =# let myvariable = 'AA';\n> > > LET\n> > >\n> > > =# select 'AA' collate \"en-x-icu\" < myvariable;\n> > > ?column?\n> > > ----------\n> > > f\n> > > (1 row)\n> > >\n> > > =# select 'AA' collate \"en-x-icu\" < myvariable collate mycollation;\n> > > ERROR: 42P21: collation mismatch between explicit collations\n> \"en-x-icu\"\n> > > and \"mycollation\"\n> > > LINE 1: select 'AA' collate \"en-x-icu\" < myvariable collate mycollat...\n> > >\n> >\n> > What do you expect? I don't understand collating well, but it looks\n> > correct. 
Minimally the tables have the same behavior.\n>\n> Indeed, I actually didn't know that such object's collation were implicit\n> and\n> could be overloaded without a problem as long as there's no conflict\n> between\n> all the explicit collations. So I agree that the current behavior is ok,\n> including a correct handling for wanted conflicts:\n>\n> =# create variable var1 text collate \"fr-x-icu\";\n> CREATE VARIABLE\n>\n> =# create variable var2 text collate \"en-x-icu\";\n> CREATE VARIABLE\n>\n> =# let var1 = 'hoho';\n> LET\n>\n> =# let var2 = 'hoho';\n> LET\n>\n> =# select var1 < var2;\n> ERROR: 42P22: could not determine which collation to use for string\n> comparison\n> HINT: Use the COLLATE clause to set the collation explicitly.\n>\n> > Please, can you check the attached patches?\n>\n> All the issue I mentioned are fixed, thanks!\n>\n>\nthank you for check\n\n\n>\n> I see a few problems with the other new features added though. The new\n> session_variables_ambiguity_warning GUC is called even in contexts where it\n> shouldn't apply. For instance:\n>\n> =# set session_variables_ambiguity_warning = 1;\n> SET\n>\n> =# create variable v text;\n> CREATE VARIABLE\n>\n> =# DO $$\n> DECLARE v text;\n> BEGIN\n> v := 'test';\n> RAISE NOTICE 'v: %', v;\n> END;\n> $$ LANGUAGE plpgsql;\n> WARNING: 42702: session variable \"v\" is shadowed by column\n> LINE 1: v := 'test'\n> ^\n> DETAIL: The identifier can be column reference or session variable\n> reference.\n> HINT: The column reference is preferred against session variable\n> reference.\n> QUERY: v := 'test'\n>\n> But this \"v := 'test'\" shouldn't be a substitute for a LET, and it indeed\n> doesn't work:\n>\n\nYes, there are some mistakes (bugs). The PLpgSQL assignment as target\nshould not see session variables, so warning is nonsense there. RAISE\nNOTICE should use local variables, and in this case is a question if we\nshould raise a warning. 
There are two possible analogies - we can see\nsession variables like global variables, and then the warning should not be\nraised, or we can see relation between session variables and plpgsql\nvariables similar like session variables and some with higher priority, and\nthen warning should be raised. If we want to ensure that the new session\nvariable doesn't break code, then session variables should have lower\npriority than plpgsql variables too. And because the plpgsql protection\nagainst collision cannot be used, then I prefer raising the warning.\n\nPLpgSQL assignment should not be in collision with session variables ever\n\n>\n> =# DO $$\n> BEGIN\n> v := 'test';\n> RAISE NOTICE 'v: %', v;\n> END;\n> $$ LANGUAGE plpgsql;\n> ERROR: 42601: \"v\" is not a known variable\n> LINE 3: v := 'test';\n>\n> But the RAISE NOTICE does see the session variable (which should be the\n> correct\n> behavior I think), so the warning should have been raised for this\n> instruction\n> (and in that case the message is incorrect, as it's not shadowing a\n> column).\n>\n\nIn this case I can detect node type, and I can identify external param\nnode, but I cannot to detect if this code was executed from PLpgSQL or from\nsome other\n\nSo I can to modify warning text to some\n\nDETAIL: The identifier can be column reference or query parameter or\nsession variable reference.\nHINT: The column reference and query parameter is preferred against\nsession variable reference.\n\nI cannot to use term \"plpgsql variable\" becase I cannot to ensure validity\nof this message\n\nMaybe is better to don't talk about source of this issue, and just talk\nabout result - so the warning text should be just\n\nMESSAGE: \"session variable \\\"xxxx\\\" is shadowed\nDETAIL: \"session variables can be shadowed by columns, routine's variables\nand routine's arguments with same name\"\n\nIs it better?", "msg_date": "Wed, 19 Jan 2022 21:09:41 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi,\n\nOn Wed, Jan 19, 2022 at 09:09:41PM +0100, Pavel Stehule wrote:\n> st 19. 1. 2022 v 9:01 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n> \n> RAISE NOTICE should use local variables, and in this case is a question if we\n> should raise a warning.
There are two possible analogies - we can see session\n> variables like global variables, and then the warning should not be raised,\n> or we can see relation between session variables and plpgsql variables\n> similar like session variables and some with higher priority, and then\n> warning should be raised. If we want to ensure that the new session variable\n> doesn't break code, then session variables should have lower priority than\n> plpgsql variables too. And because the plpgsql protection against collision\n> cannot be used, then I prefer raising the warning.\n\nAh that's indeed a good point. I agree, they're from a different part of the\nsystem so they should be treated as different things, and thus raising a\nwarning. It's consistent with the chosen conservative approach anyway.\n\n> PLpgSQL assignment should not be in collision with session variables ever\n\nAgreed.\n\n> \n> >\n> > =# DO $$\n> > BEGIN\n> > v := 'test';\n> > RAISE NOTICE 'v: %', v;\n> > END;\n> > $$ LANGUAGE plpgsql;\n> > ERROR: 42601: \"v\" is not a known variable\n> > LINE 3: v := 'test';\n> >\n> > But the RAISE NOTICE does see the session variable (which should be the\n> > correct\n> > behavior I think), so the warning should have been raised for this\n> > instruction\n> > (and in that case the message is incorrect, as it's not shadowing a\n> > column).\n> >\n> \n> In this case I can detect node type, and I can identify external param\n> node, but I cannot to detect if this code was executed from PLpgSQL or from\n> some other\n> \n> So I can to modify warning text to some\n\nYes, that's what I had in mind too.\n\n> DETAIL: The identifier can be column reference or query parameter or\n> session variable reference.\n> HINT: The column reference and query parameter is preferred against\n> session variable reference.\n> \n> I cannot to use term \"plpgsql variable\" becase I cannot to ensure validity\n> of this message\n> \n> Maybe is better to don't talk about source of this issue, and just 
talk\n> about result - so the warning text should be just\n> \n> MESSAGE: \"session variable \\\"xxxx\\\" is shadowed\n> DETAIL: \"session variables can be shadowed by columns, routine's variables\n> and routine's arguments with same name\"\n> \n> Is it better?\n\nI clearly prefer the 2nd version.\n\n\n", "msg_date": "Thu, 20 Jan 2022 06:03:40 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nst 19. 1. 2022 v 9:01 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> Hi,\n>\n> On Tue, Jan 18, 2022 at 10:01:01PM +0100, Pavel Stehule wrote:\n> > pá 14. 1. 2022 v 3:44 odesílatel Julien Rouhaud <rjuju123@gmail.com>\n> napsal:\n> >\n> > >\n> > > =# let myvariable = 'AA';\n> > > LET\n> > >\n> > > =# select 'AA' collate \"en-x-icu\" < myvariable;\n> > > ?column?\n> > > ----------\n> > > f\n> > > (1 row)\n> > >\n> > > =# select 'AA' collate \"en-x-icu\" < myvariable collate mycollation;\n> > > ERROR: 42P21: collation mismatch between explicit collations\n> \"en-x-icu\"\n> > > and \"mycollation\"\n> > > LINE 1: select 'AA' collate \"en-x-icu\" < myvariable collate mycollat...\n> > >\n> >\n> > What do you expect? I don't understand collating well, but it looks\n> > correct. Minimally the tables have the same behavior.\n>\n> Indeed, I actually didn't know that such object's collation were implicit\n> and\n> could be overloaded without a problem as long as there's no conflict\n> between\n> all the explicit collations. 
So I agree that the current behavior is ok,\n> including a correct handling for wanted conflicts:\n>\n> =# create variable var1 text collate \"fr-x-icu\";\n> CREATE VARIABLE\n>\n> =# create variable var2 text collate \"en-x-icu\";\n> CREATE VARIABLE\n>\n> =# let var1 = 'hoho';\n> LET\n>\n> =# let var2 = 'hoho';\n> LET\n>\n> =# select var1 < var2;\n> ERROR: 42P22: could not determine which collation to use for string\n> comparison\n> HINT: Use the COLLATE clause to set the collation explicitly.\n>\n> > Please, can you check the attached patches?\n>\n> All the issue I mentioned are fixed, thanks!\n>\n>\n> I see a few problems with the other new features added though. The new\n> session_variables_ambiguity_warning GUC is called even in contexts where it\n> shouldn't apply. For instance:\n>\n> =# set session_variables_ambiguity_warning = 1;\n> SET\n>\n> =# create variable v text;\n> CREATE VARIABLE\n>\n> =# DO $$\n> DECLARE v text;\n> BEGIN\n> v := 'test';\n> RAISE NOTICE 'v: %', v;\n> END;\n> $$ LANGUAGE plpgsql;\n> WARNING: 42702: session variable \"v\" is shadowed by column\n> LINE 1: v := 'test'\n> ^\n> DETAIL: The identifier can be column reference or session variable\n> reference.\n> HINT: The column reference is preferred against session variable\n> reference.\n> QUERY: v := 'test'\n>\n> But this \"v := 'test'\" shouldn't be a substitute for a LET, and it indeed\n> doesn't work:\n>\n> =# DO $$\n> BEGIN\n> v := 'test';\n> RAISE NOTICE 'v: %', v;\n> END;\n> $$ LANGUAGE plpgsql;\n> ERROR: 42601: \"v\" is not a known variable\n> LINE 3: v := 'test';\n>\n\nfixed\n\n\n>\n> But the RAISE NOTICE does see the session variable (which should be the\n> correct\n> behavior I think), so the warning should have been raised for this\n> instruction\n> (and in that case the message is incorrect, as it's not shadowing a\n> column).\n>\n> Also, the pg_dump handling emits a COLLATION option for session variables\n> even\n> for default collation, while it should only emit it if the 
collation is\n> not the\n> type's default collation. As a reference, for attributes the SQL used is:\n>\n> \"CASE WHEN a.attcollation\n> <> t.typcollation \"\n> \"THEN a.attcollation ELSE\n> 0 END AS attcollation,\\n\"\n>\n\nIsn't it a different issue? I don't see filtering DEFAULT_COLLATION_OID in\npg_dump code. But this case protects against a redundant COLLATE clause,\nand for consistency, this check should be done for variables too.\n\n<-->/*\n<--> * Find all the user attributes and their types.\n<--> *\n<--> * Since we only want to dump COLLATE clauses for attributes whose\n<--> * collation is different from their type's default, we use a CASE here\nto\n<--> * suppress uninteresting attcollations cheaply.\n<--> */\n\nfixed\n\n\n\n>\n> Also, should \\dV or \\dV+ show the collation?\n>\n\nI did it for \\dV\n\n\n>\n> And a few comments on the new chunks in this version of the patch (I didn't\n> look in detail at the whole patch yet):\n>\n> + <para>\n> + The session variables can be overshadowed by columns in an query.\n> When query\n> + holds identifier or qualified identifier that can be used as session\n> variable\n> + identifier and as column identifier too, then it is used as column\n> identifier\n> + every time. This situation can be logged by enabling configuration\n> + parameter <xref linkend=\"guc-session-variables-ambiguity-warning\"/>.\n> + </para>\n>\n> Is \"overshadowed\" correct? The rest of the patch only says \"shadow(ed)\".\n>\n> While at it, here's some proposition to improve the phrasing:\n>\n> + The session variables can be shadowed by column references in a query.\n> When a\n> + query contains identifiers or qualified identifiers that could be used\n> as both\n> + a session variable identifiers and as column identifier, then the column\n> + identifier is preferred every time. 
Warnings can be emitted when this\n> situation\n> + happens by enabling configuration parameter <xref\n> + linkend=\"guc-session-variables-ambiguity-warning\"/>.\n>\n> Similarly, the next documentation could be rephrased to:\n>\n> + When on, a warning is raised when any identifier in a query could be\n> used as both\n> + a column identifier or a session variable identifier.\n> + The default is <literal>off</literal>.\n>\n>\nchanged\n\n\n>\n> Few other nitpicking:\n>\n> + * If we really detect collision of column and variable\n> identifier,\n> + * then we prefer column, because we don't want to allow to\n> break\n> + * an existing valid queries by new variable.\n>\n> s/an existing/existing\n>\n\nrefactorized\n\n\n>\n> +-- it is ambigonuous, but columns are preferred\n>\n> ambiguous?\n>\n\nfixed\n\n\n>\n>\n> @@ -369,6 +367,19 @@ VariableCreate(const char *varName,\n> /* dependency on extension */\n> recordDependencyOnCurrentExtension(&myself, false);\n>\n> + /*\n> + * Normal dependency from a domain to its collation. 
We know the\n> default\n> + * collation is pinned, so don't bother recording it.\n> + */\n> + if (OidIsValid(varCollation) &&\n> + varCollation != DEFAULT_COLLATION_OID)\n>\n> The comment mentions domains rather than session variables.\n>\n>\nfixed\n\n\n> And for the initial patch, while looking around I found this comment on\n> fix_alternative_subplan():\n>\n\nthis is little bit strange - modified function is fix_scan_expr\n\n>\n> @@ -1866,7 +1969,9 @@ fix_alternative_subplan(PlannerInfo *root,\n> AlternativeSubPlan *asplan,\n> * replacing Aggref nodes that should be replaced by initplan output\n> Params,\n> * choosing the best implementation for AlternativeSubPlans,\n> * looking up operator opcode info for OpExpr and related nodes,\n> - * and adding OIDs from regclass Const nodes into\n> root->glob->relationOids.\n> + * and adding OIDs from regclass Const nodes into\n> root->glob->relationOids,\n> + * and replacing PARAM_VARIABLE paramid, that is the oid of the session\n> variable\n> + * to offset the array by query used session variables. 
???\n>\n> I don't really understand the comment, and the \"???\" looks a bit\n> suspicious.\n> I'm assuming it's a reference to this new behavior in fix_param_node():\n>\n\nyes, I modified this comment\n\n\n>\n> * fix_param_node\n> * Do set_plan_references processing on a Param\n> + * Collect session variables list and replace variable oid by\n> + * index to collected list.\n> *\n> * If it's a PARAM_MULTIEXPR, replace it with the appropriate Param from\n> * root->multiexpr_params; otherwise no change is needed.\n> * Just for paranoia's sake, we make a copy of the node in either case.\n> + *\n> + * If it's a PARAM_VARIABLE, then we should to calculate paramid.\n>\n> Some improvement on the comments would be welcome there, probably including\n> some mention to the \"glob->sessionVariables\" collected list?\n>\n\ndone\n\nI am sending updated patches\n\nRegards\n\nPavel", "msg_date": "Fri, 21 Jan 2022 21:23:34 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi,\n\nOn Fri, Jan 21, 2022 at 09:23:34PM +0100, Pavel Stehule wrote:\n> \n> st 19. 1. 2022 v 9:01 odes�latel Julien Rouhaud <rjuju123@gmail.com> napsal:\n> >\n> > Also, the pg_dump handling emits a COLLATION option for session variables\n> > even\n> > for default collation, while it should only emit it if the collation is\n> > not the\n> > type's default collation. As a reference, for attributes the SQL used is:\n> >\n> > \"CASE WHEN a.attcollation\n> > <> t.typcollation \"\n> > \"THEN a.attcollation ELSE\n> > 0 END AS attcollation,\\n\"\n> >\n> \n> Isn't it a different issue? I don't see filtering DEFAULT_COLLATION_OID in\n> pg_dump code. 
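The suppression being discussed — pg_dump emitting a COLLATE clause only when an object's collation is set and differs from its type's default collation — reduces to a small predicate. Here is a standalone sketch of that rule (a simplified model with invented Oid values, not the actual pg_dump code):

```c
#include <stdbool.h>

typedef unsigned int Oid;
#define InvalidOid ((Oid) 0)

/*
 * Mirror of the CASE expression pg_dump uses for attributes:
 * report a collation only when it is set and differs from the
 * type's default collation; otherwise report 0 so no COLLATE
 * clause is emitted.  (Toy model, not the real pg_dump code.)
 */
Oid
collation_to_dump(Oid objcollation, Oid typcollation)
{
    if (objcollation != InvalidOid && objcollation != typcollation)
        return objcollation;
    return InvalidOid;
}

/* The dump code would then append "COLLATE ..." only for a nonzero result. */
bool
needs_collate_clause(Oid objcollation, Oid typcollation)
{
    return collation_to_dump(objcollation, typcollation) != InvalidOid;
}
```

Applying the same predicate to session variables is what avoids the redundant COLLATE clause mentioned above.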
But this case protects against a redundant COLLATE clause,\n> and for consistency, this check should be done for variables too.\n\nYes, sorry my message was a bit ambiguous as for all native collatable types\nthe \"default\" collation is the type's default collation, I thought that the\ncode extract would make it clear enough.\n\nIn any case your fix is exactly what I had in mind so it's perfect, thanks!\n\n> I am sending updated patches\n\nThanks a lot! I will try to review them over the weekend.\n\n\n", "msg_date": "Sat, 22 Jan 2022 12:04:01 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi,\n\nOn Fri, Jan 21, 2022 at 09:23:34PM +0100, Pavel Stehule wrote:\n> \n> I am sending updated patches\n\nI've been looking a bit deeper at the feature and I noticed that there's no\nlocking involved around the session variable usage, and I don't think that's\nok. AFAICS any variable used in a session will be cached in the local hash\ntable and will never try to access some catalog or cache, so I don't have any\nnaive scenario that would immediately crash, but this has some other\nimplications that seems debatable.\n\nFor instance, right now nothing prevents a variable from being dropped while\nanother session is using it.\n\nObviously we can't lock a session variable forever just because a session\nassigned a value once ages ago, especially outside of the current transaction.\nBut if a session set a variable in the local transaction, I don't think that\nit's ok to have a subsequent query failing because someone else concurrently\ndropped the variable.\n\nI only backlogged this current thread but I didn't see that being discussed.\n\n\n", "msg_date": "Sun, 23 Jan 2022 16:10:34 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { 
"msg_contents": "ne 23. 1. 2022 v 9:10 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> Hi,\n>\n> On Fri, Jan 21, 2022 at 09:23:34PM +0100, Pavel Stehule wrote:\n> >\n> > I am sending updated patches\n>\n> I've been looking a bit deeper at the feature and I noticed that there's no\n> locking involved around the session variable usage, and I don't think\n> that's\n> ok. AFAICS any variable used in a session will be cached in the local hash\n> table and will never try to access some catalog or cache, so I don't have\n> any\n> naive scenario that would immediately crash, but this has some other\n> implications that seems debatable.\n>\n> For instance, right now nothing prevents a variable from being dropped\n> while\n> another session is using it.\n>\n> Obviously we can't lock a session variable forever just because a session\n> assigned a value once ages ago, especially outside of the current\n> transaction.\n> But if a session set a variable in the local transaction, I don't think\n> that\n> it's ok to have a subsequent query failing because someone else\n> concurrently\n> dropped the variable.\n>\n> I only backlogged this current thread but I didn't see that being\n> discussed.\n>\n\nIsn't there enough stability of the system cache? sinval is sent at the\nmoment when changes in the system catalog are visible. So inside query\nexecution I don't see that the variable was dropped in another session.\n\n", "msg_date": "Sun, 23 Jan 2022 09:25:56 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi,\n\nOn Sun, Jan 23, 2022 at 09:25:56AM +0100, Pavel Stehule wrote:\n> ne 23. 1. 2022 v 9:10 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n> \n> Isn't there enough stability of the system cache? sinval is sent at the\n> moment when changes in the system catalog are visible. So inside query\n> execution I don't see that the variable was dropped in another session.\n\nYes, inside a single query it should probably be ok, but I'm talking about\nmultiple query execution in the same transaction.\n\n\n", "msg_date": "Sun, 23 Jan 2022 16:52:45 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nne 23. 1. 
2022 v 9:52 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> Hi,\n>\n> On Sun, Jan 23, 2022 at 09:25:56AM +0100, Pavel Stehule wrote:\n> > ne 23. 1. 2022 v 9:10 odesílatel Julien Rouhaud <rjuju123@gmail.com>\n> napsal:\n> >\n> > Isn't there enough stability of the system cache? sinval is sent at the\n> > moment when changes in the system catalog are visible. So inside query\n> > execution I don't see that the variable was dropped in another session.\n>\n> Yes, inside a single query it should probably be ok, but I'm talking about\n> multiple query execution in the same transaction.\n>\n\nI tested it now. a sinval message is waiting on the transaction end.  So\nwhen a variable is used, then it is working fine until the transaction ends.\nBut when the session makes some DDL, then send sinval to self, and at this\nmoment, the variable can be dropped before the transaction ends.\n\nSo to be safe, the lock is required. I'll do it tomorrow.\n\nRegards\n\nPavel\n\n", "msg_date": "Sun, 23 Jan 2022 15:33:33 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi,\n\nLe dim. 23 janv. 2022 à 22:34, Pavel Stehule <pavel.stehule@gmail.com> a\nécrit :\n\n> I tested it now. a sinval message is waiting on the transaction end. So\n> when a variable is used, then it is working fine until the transaction ends.\n> But when the session makes some DDL, then send sinval to self, and at this\n> moment, the variable can be dropped before the transaction ends.\n>\n\na backend can accept sinval in very common scenarios, like acquiring a\nheavyweight lock. That includes accessing a relation thats not in the\ncatcache, so that's really critical to have a protection here.\n\n", "msg_date": "Sun, 23 Jan 2022 23:06:15 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nne 23. 1. 2022 v 16:06 odesílatel Julien Rouhaud <rjuju123@gmail.com>\nnapsal:\n\n> Hi,\n>\n> Le dim. 23 janv. 2022 à 22:34, Pavel Stehule <pavel.stehule@gmail.com> a\n> écrit :\n>\n>> I tested it now. a sinval message is waiting on the transaction end. 
So\n>> when a variable is used, then it is working fine until the transaction ends.\n>> But when the session makes some DDL, then send sinval to self, and at\n>> this moment, the variable can be dropped before the transaction ends.\n>>\n>\n> a backend can accept sinval in very common scenarios, like acquiring a\n> heavyweight lock. That includes accessing a relation thats not in the\n> catcache, so that's really critical to have a protection here.\n>\n\nhere is updated patch with locking support\n\nRegards\n\nPavel", "msg_date": "Mon, 24 Jan 2022 12:33:11 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi,\n\nOn Mon, Jan 24, 2022 at 12:33:11PM +0100, Pavel Stehule wrote:\n> \n> here is updated patch with locking support\n\nThanks for updating the patch!\n\nWhile the locking is globally working as intended, I found a few problems with\nit.\n\nFirst, I don't think that acquiring the lock in\nget_session_variable_type_typmod_collid() and prepare_variable_for_reading() is\nthe correct approach. In transformColumnRef() and transformLetStmt() you first\ncall IdentifyVariable() to check if the given name is a variable without\nlocking it and later try to lock the variable if you get a valid Oid. This is\nbug prone as any other backend could drop the variable between the two calls\nand you would end up with a cache lookup failure. I think the lock should be\nacquired during IdentifyVariable. It should probably be optional as one\ncodepath only needs the information to raise a warning when a variable is\nshadowed, so a concurrent drop isn't a problem there.\n\nFor prepare_variable_for_reading(), the callers are CopySessionVariable() and\nGetSessionVariable(). 
IIUC those should take care of executor-time locks, but\nshouldn't there be some changes for planning, like in AcquirePlannerLocks()?\n\nSome other comments on this part of the patch:\n\n@@ -717,6 +730,9 @@ RemoveSessionVariable(Oid varid)\n Relation rel;\n HeapTuple tup;\n\n+ /* Wait, when dropped variable is not used */\n+ LockDatabaseObject(VariableRelationId, varid, 0, AccessExclusiveLock);\n\nWhy do you explicitly try to acquire an AEL on the variable here?\nRemoveObjects / get_object_address should guarantee that this was already done.\nYou could add an assert LockHeldByMe() here, but no other code path do it so it\nwould probably waste cycles in assert builds for nothing as it's a fundamental\nguarantee.\n\n\n@@ -747,6 +763,9 @@ RemoveSessionVariable(Oid varid)\n * only when current transaction will be commited.\n */\n register_session_variable_xact_action(varid, ON_COMMIT_RESET);\n+\n+ /* Release lock */\n+ UnlockDatabaseObject(VariableRelationId, varid, 0, AccessExclusiveLock);\n }\n\nWhy releasing the lock here? It will be done at the end of the transaction,\nand you certainly don't want other backends to start using this variable in\nbetween. Also, since you acquired the lock a second time it only decreases the\nlock count in the locallock so the lock isn't released anyway.\n\n+ * Returns type, typmod and collid of session variable.\n+ *\n+ * As a side effect this function acquires AccessShareLock on the\n+ * related session variable.\n */\n void\n-get_session_variable_type_typmod_collid(Oid varid, Oid *typid, int32 *typmod, Oid *collid)\n+get_session_variable_type_typmod_collid(Oid varid, Oid *typid, int32 *typmod, Oid *collid,\n+ bool lock_held)\n\n\nlock_held is a bit misleading. 
If you keep some similar parameter for this or\nanother function, maybe name it lock_it or something like that instead?\n\nAlso, the comment isn't accurate and should say that an ASL is acquired iff the\nvariable is true.\n\n+ /*\n+ * Acquire a lock on session variable, which we won't release until commit.\n+ * This ensure that one backend cannot to drop session variable used by\n+ * second backend.\n+ */\n\n(and similar comments)\nI don't think it's necessary to explain why we acquire locks, we should just\nsay that the lock will be kept for the whole transaction (and not until a\ncommit)\n\nAnd while looking at nearby code, it's probably worthwhile to add an Assert in\ncreate_sessionvars_hashtable() to validate that sessionvars htab is NULL.\n\n\n", "msg_date": "Tue, 25 Jan 2022 13:18:45 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "út 25. 1. 2022 v 6:18 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> Hi,\n>\n> On Mon, Jan 24, 2022 at 12:33:11PM +0100, Pavel Stehule wrote:\n> >\n> > here is updated patch with locking support\n>\n> Thanks for updating the patch!\n>\n> While the locking is globally working as intended, I found a few problems\n> with\n> it.\n>\n> First, I don't think that acquiring the lock in\n> get_session_variable_type_typmod_collid() and\n> prepare_variable_for_reading() is\n> the correct approach. In transformColumnRef() and transformLetStmt() you\n> first\n> call IdentifyVariable() to check if the given name is a variable without\n> locking it and later try to lock the variable if you get a valid Oid.\n> This is\n> bug prone as any other backend could drop the variable between the two\n> calls\n> and you would end up with a cache lookup failure. I think the lock should\n> be\n> acquired during IdentifyVariable. 
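The lookup-then-lock race described here can be sketched with a toy, single-process model: a name lookup returns an Oid, a concurrent DROP invalidates it, and a later lock attempt fails — unless identification and locking happen in one step and the held lock then blocks the DROP. All names and Oid values below are invented for illustration; this is not the patch's code.

```c
#include <string.h>
#include <stdbool.h>

typedef unsigned int Oid;
#define InvalidOid ((Oid) 0)

typedef struct {
    Oid  oid;
    char name[32];
    bool dropped;
    int  lockcount;   /* stand-in for heavyweight-lock holders */
} ToyVariable;

static ToyVariable vars[] = {
    {1001, "public.myvar", false, 0},
};
#define NVARS (sizeof(vars) / sizeof(vars[0]))

static ToyVariable *find_var(Oid oid) {
    for (unsigned i = 0; i < NVARS; i++)
        if (vars[i].oid == oid)
            return &vars[i];
    return NULL;
}

/* name lookup without locking: returns an Oid that may be stale later */
Oid toy_identify(const char *name) {
    for (unsigned i = 0; i < NVARS; i++)
        if (!vars[i].dropped && strcmp(vars[i].name, name) == 0)
            return vars[i].oid;
    return InvalidOid;
}

/* locking by Oid fails once the variable is gone ("cache lookup failure") */
bool toy_lock(Oid oid) {
    ToyVariable *v = find_var(oid);
    if (v == NULL || v->dropped)
        return false;
    v->lockcount++;
    return true;
}

/* a concurrent DROP only succeeds when nobody holds a lock */
bool toy_drop(Oid oid) {
    ToyVariable *v = find_var(oid);
    if (v == NULL || v->dropped || v->lockcount > 0)
        return false;
    v->dropped = true;
    return true;
}

/* fixed shape: lookup and lock in one step, as suggested for IdentifyVariable */
Oid toy_identify_and_lock(const char *name, bool lockit) {
    Oid oid = toy_identify(name);
    if (oid != InvalidOid && lockit && !toy_lock(oid))
        return InvalidOid;      /* treat a lost race as "not found" */
    return oid;
}
```

Once the caller gets back a locked Oid, a concurrent drop is refused instead of causing a later lookup failure; the `lockit` flag models the shadowed-variable path that only needs the name for a warning.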
It should probably be optional as one\n> codepath only needs the information to raise a warning when a variable is\n> shadowed, so a concurrent drop isn't a problem there.\n>\n\nThere is a problem, because before the IdentifyVariable call I don't know\nif the variable will be shadowed or not.\n\nIf I lock a variable inside IdentifyVariable, then I need to remember if I\ndid lock there, or if the variable was locked already, and If the variable\nis shadowed and if lock is fresh, then I can unlock the variable.\n\n\nRegards\n\nPavel\n\n", "msg_date": "Tue, 25 Jan 2022 09:35:09 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi,\n\nOn Tue, Jan 25, 2022 at 09:35:09AM +0100, Pavel Stehule wrote:\n> út 25. 1. 2022 v 6:18 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n> \n> > I think the lock should be\n> > acquired during IdentifyVariable.  It should probably be optional as one\n> > codepath only needs the information to raise a warning when a variable is\n> > shadowed, so a concurrent drop isn't a problem there.\n> >\n> \n> There is a problem, because before the IdentifyVariable call I don't know\n> if the variable will be shadowed or not.\n> \n> If I lock a variable inside IdentifyVariable, then I need to remember if I\n> did lock there, or if the variable was locked already, and If the variable\n> is shadowed and if lock is fresh, then I can unlock the variable.\n\nBut in transformColumnRef() you already know if you found a matching column or\nnot when calling IdentifyVariable(), so you know if an existing variable will\nshadow it right?\n\nCouldn't you call something like\n\n    lockit = node == NULL;\n\tvarid = IdentifyVariable(cref->fields, &attrname, &not_unique, lockit);\n\nThe only other caller is transformLetStmt(), which should always lock the\nvariable anyway.\n\n\n", "msg_date": "Tue, 25 Jan 2022 16:48:21 +0800", "msg_from": "Julien Rouhaud 
<rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "út 25. 1. 2022 v 9:48 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> Hi,\n>\n> On Tue, Jan 25, 2022 at 09:35:09AM +0100, Pavel Stehule wrote:\n> > út 25. 1. 2022 v 6:18 odesílatel Julien Rouhaud <rjuju123@gmail.com>\n> napsal:\n> >\n> > > I think the lock should be\n> > > acquired during IdentifyVariable. It should probably be optional as\n> one\n> > > codepath only needs the information to raise a warning when a variable\n> is\n> > > shadowed, so a concurrent drop isn't a problem there.\n> > >\n> >\n> > There is a problem, because before the IdentifyVariable call I don't know\n> > if the variable will be shadowed or not.\n> >\n> > If I lock a variable inside IdentifyVariable, then I need to remember if\n> I\n> > did lock there, or if the variable was locked already, and If the\n> variable\n> > is shadowed and if lock is fresh, then I can unlock the variable.\n>\n> But in transformColumnRef() you already know if you found a matching\n> column or\n> not when calling IdentifyVariable(), so you know if an existing variable\n> will\n> shadow it right?\n>\n\nyes, you have true,\n\nThank you\n\n\n\n>\n> Couldn't you call something like\n>\n> lockit = node == NULL;\n> varid = IdentifyVariable(cref->fields, &attrname, &not_unique,\n> lockit);\n>\n> The only other caller is transformLetStmt(), which should always lock the\n> variable anyway.\n>\n\n", "msg_date": "Tue, 25 Jan 2022 09:52:48 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nút 25. 1. 
In transformColumnRef() and transformLetStmt() you\n> first\n> call IdentifyVariable() to check if the given name is a variable without\n> locking it and later try to lock the variable if you get a valid Oid.\n> This is\n> bug prone as any other backend could drop the variable between the two\n> calls\n> and you would end up with a cache lookup failure. I think the lock should\n> be\n> acquired during IdentifyVariable. It should probably be optional as one\n> codepath only needs the information to raise a warning when a variable is\n> shadowed, so a concurrent drop isn't a problem there.\n>\n\nI moved lock to IdentifyVariable routine\n\n\n>\n> For prepare_variable_for_reading(), the callers are CopySessionVariable()\n> and\n> GetSessionVariable(). IIUC those should take care of executor-time locks,\n> but\n> shouldn't there be some changes for planning, like in\n> AcquirePlannerLocks()?\n>\n\ndone\n\n\n>\n> Some other comments on this part of the patch:\n>\n> @@ -717,6 +730,9 @@ RemoveSessionVariable(Oid varid)\n> Relation rel;\n> HeapTuple tup;\n>\n> + /* Wait, when dropped variable is not used */\n> + LockDatabaseObject(VariableRelationId, varid, 0, AccessExclusiveLock);\n>\n> Why do you explicitly try to acquire an AEL on the variable here?\n> RemoveObjects / get_object_address should guarantee that this was already\n> done.\n> You could add an assert LockHeldByMe() here, but no other code path do it\n> so it\n> would probably waste cycles in assert builds for nothing as it's a\n> fundamental\n> guarantee.\n>\n>\nremoved\n\n\n>\n> @@ -747,6 +763,9 @@ RemoveSessionVariable(Oid varid)\n> * only when current transaction will be commited.\n> */\n> register_session_variable_xact_action(varid, ON_COMMIT_RESET);\n> +\n> + /* Release lock */\n> + UnlockDatabaseObject(VariableRelationId, varid, 0,\n> AccessExclusiveLock);\n> }\n>\n> Why releasing the lock here? 
It will be done at the end of the\n> transaction,\n> and you certainly don't want other backends to start using this variable in\n> between. Also, since you acquired the lock a second time it only\n> decreases the\n> lock count in the locallock so the lock isn't released anyway.\n>\n>\n removed\n\n+ * Returns type, typmod and collid of session variable.\n> + *\n> + * As a side effect this function acquires AccessShareLock on the\n> + * related session variable.\n> */\n> void\n> -get_session_variable_type_typmod_collid(Oid varid, Oid *typid, int32\n> *typmod, Oid *collid)\n> +get_session_variable_type_typmod_collid(Oid varid, Oid *typid, int32\n> *typmod, Oid *collid,\n> + bool lock_held)\n>\n>\n> lock_held is a bit misleading. If you keep some similar parameter for\n> this or\n> another function, maybe name it lock_it or something like that instead?\n>\n> Also, the comment isn't accurate and should say that an ASL is acquired\n> iff the\n> variable is true.\n>\n\nremoved\n\n\n\n>\n> + /*\n> + * Acquire a lock on session variable, which we won't release until\n> commit.\n> + * This ensure that one backend cannot to drop session variable used by\n> + * second backend.\n> + */\n>\n> (and similar comments)\n> I don't think it's necessary to explain why we acquire locks, we should\n> just\n> say that the lock will be kept for the whole transaction (and not until a\n> commit)\n>\n\nremoved\n\n\n>\n> And while looking at nearby code, it's probably worthwhile to add an\n> Assert in\n> create_sessionvars_hashtable() to validate that sessionvars htab is NULL.\n>\n\ndone\n\nattached updated patch\n\nRegards\n\nPavel", "msg_date": "Tue, 25 Jan 2022 22:53:00 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi,\n\nOn Tue, Jan 25, 2022 at 10:53:00PM +0100, Pavel Stehule wrote:\n> \n> �t 25. 1. 
2022 v 6:18 odes�latel Julien Rouhaud <rjuju123@gmail.com> napsal:\n> >\n> > First, I don't think that acquiring the lock in\n> > get_session_variable_type_typmod_collid() and\n> > prepare_variable_for_reading() is\n> > the correct approach. In transformColumnRef() and transformLetStmt() you\n> > first\n> > call IdentifyVariable() to check if the given name is a variable without\n> > locking it and later try to lock the variable if you get a valid Oid.\n> > This is\n> > bug prone as any other backend could drop the variable between the two\n> > calls\n> > and you would end up with a cache lookup failure. I think the lock should\n> > be\n> > acquired during IdentifyVariable. It should probably be optional as one\n> > codepath only needs the information to raise a warning when a variable is\n> > shadowed, so a concurrent drop isn't a problem there.\n> >\n> \n> I moved lock to IdentifyVariable routine\n\n+IdentifyVariable(List *names, char **attrname, bool lockit, bool *not_unique)\n+{\n[...]\n+ return varoid_without_attr;\n+ }\n+ else\n+ {\n+ *attrname = c;\n+ return varoid_with_attr;\n[...]\n+\n+ if (OidIsValid(varid) && lockit)\n+ LockDatabaseObject(VariableRelationId, varid, 0, AccessShareLock);\n+\n+ return varid;\n\nThere are still some code paths that may not lock the target variable when\nrequired.\n\nAlso, the function comment doesn't say much about attrname handling, it should\nbe clarifed. I think it should initially be set to NULL, to make sure that\nit's always a valid pointer after the function returns.\n\n\n> attached updated patch\n\nVarious comments on the patch:\n\nNo test for GRANT/REVOKE ... 
ALL VARIABLES IN SCHEMA, maybe it would be good to\nhave one?\n\nDocumentation:\n\ncatalogs.sgml:\n\nYou're still using the old-style 4 columns table, it should be a single column\nlike the rest of the file.\n\n+ <para>\n+ The <command>CREATE VARIABLE</command> command creates a session variable.\n+ Session variables, like relations, exist within a schema and their access is\n+ controlled via <command>GRANT</command> and <command>REVOKE</command>\n+ commands. Changing a session variable is non-transactional.\n+ </para>\n\nThe \"changing a session variable is non-transactional\" is ambiguous. I think\nthat only the value part isn't transactional, the variable metadata themselves\n(ALTER VARIABLE and other DDL) are transactional right? This should be\nexplicitly described here (although it's made less ambiguous in the next\nparagraph).\n\n+ <para>\n+ Session variables are retrieved by the <command>SELECT</command> SQL\n+ command. Their value is set with the <command>LET</command> SQL command.\n+ While session variables share properties with tables, their value cannot be\n+ updated with an <command>UPDATE</command> command.\n+ </para>\n\nshould this part mention that session variables can be shadowed? For now the\nonly mention to that is in advanced.sgml.\n\n+ The <literal>DEFAULT</literal> clause can be used to assign a default\n+ value to a session variable.\n\nThe expression is lazily evaluated during the session first use of the\nvariable. This should be documented as any usage of volatile expression will\nbe impacted.\n\n+ The <literal>ON TRANSACTION END RESET</literal>\n+ clause causes the session variable to be reset to its default value when\n+ the transaction is committed or rolled back.\n\nAs far as I can see this clauses doesn't play well with IMMUTABLE VARIABLE, as\nyou can reassign a value once the transaction ends. Same for DISCARD [ ALL |\nVARIABLES ], or LET var = NULL (or DEFAULT if no default value). 
Is that\nintended?\n\n+ <literal>LET</literal> extends the syntax defined in the SQL\n+ standard. The <literal>SET</literal> command from the SQL standard\n+ is used for different purposes in <productname>PostgreSQL</productname>.\n\nI don't fully understand that. Are (session) variables defined in the SQL\nstandard? If yes, all the other documentation pages should clarify that as\nthey currently say that this is a postgres extension. If not, this part should\nmake it clear what is defined in the standard.\n\nIn revoke.sgml:\n+ REVOKE [ GRANT OPTION FOR ]\n+ { { READ | WRITE } [, ...] | ALL [ PRIVILEGES ] }\n+ ON VARIABLE <replaceable>variable_name</replaceable> [, ...]\n+ FROM { [ GROUP ] <replaceable class=\"parameter\">role_name</replaceable> | PUBLIC } [, ...]\n+ [ CASCADE | RESTRICT ]\n\nthere's no extra documentation for that, and therefore no clarification on\nvariable_name.\n\nVariableIsVisible():\n+\t\t * If it is in the path, it might still not be visible; it could be\n+\t\t * hidden by another relation of the same name earlier in the path. So\n+\t\t * we must do a slow check for conflicting relations.\n\nshould it be \"another variable of the same name\"?\n\n\nTab completion: CREATE IMMUTABLE VARIABLE is not handled\n\n\npg_variable.c:\nDo we really need both session_variable_get_name() and\nget_session_variable_name()?\n\n+/*\n+ * Fetch all fields of session variable from the syscache.\n+ */\n+void\n+initVariable(Variable *var, Oid varid, bool missing_ok, bool fast_only)\n\nAt least fast_only should be documented in the function comment, especially\nregarding var->varname, since:\n\n+ var->oid = varid;\n+ var->name = pstrdup(NameStr(varform->varname));\n[...]\n+ if (!fast_only)\n+ {\n+ Datum aclDatum;\n+ bool isnull;\n+\n+ /* name */\n+ var->name = pstrdup(NameStr(varform->varname));\n[...]\n+ else\n+ {\n+ var->name = NULL;\n\nis the output value guaranteed or not? 
In any case it shouldn't be set twice.\n\nAlso, I don't see any caller for missing_ok == true, should we remove it?\n\n+/*\n+ * Create entry in pg_variable table\n+ */\n+ObjectAddress\n+VariableCreate(const char *varName,\n[...]\n+ /* dependency on any roles mentioned in ACL */\n+ if (varacl != NULL)\n+ {\n+ int nnewmembers;\n+ Oid *newmembers;\n+\n+ nnewmembers = aclmembers(varacl, &newmembers);\n+ updateAclDependencies(VariableRelationId, varid, 0,\n+ varOwner,\n+ 0, NULL,\n+ nnewmembers, newmembers);\n\nShouldn't you use recordDependencyOnNewAcl() instead? Also, isn't it missing a\nrecordDependencyOnOwner()?\n\nsessionvariable.c:\n\n+ * Although session variables are not transactional, we don't\n+ * want (and we cannot) to run cleaning immediately (when we\n+ * got sinval message). The value of session variables can\n+ * be still used or the operation that emits cleaning can be\n+ * reverted. Unfortunatelly, this check can be done only in\n+ * when transaction is committed (the check against system\n+ * catalog requires transaction state).\n\nThis was the original idea, but since there's now locking to make all DDL safe,\nthe metadata should be considered fully transactional and no session should\nstill be able to use a concurrently dropped variable. Also, the invalidation\nmessages are not sent until the transaction is committed. 
So is that approach\nstill needed (at least for things outside ON COMMIT DROP / ON TRANSACTION END\nRESET)?\n\nI'm also attaching a 3rd patch with some proposition for documentation\nrewording (including consistent use of *session* variable), a few comments\nrewording, copyright year bump and minor things like that.\n\nNote that I still didn't really review pg_variable.c or sessionvariable.c since\nthere might be significant changes there for either the sinval / immutable part\nI mentioned.", "msg_date": "Wed, 26 Jan 2022 15:23:06 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "> sessionvariable.c:\n>\n> + * Although session variables are not transactional, we don't\n> + * want (and we cannot) to run cleaning immediately (when we\n> + * got sinval message). The value of session variables can\n> + * be still used or the operation that emits cleaning can be\n> + * reverted. Unfortunatelly, this check can be done only in\n> + * when transaction is committed (the check against system\n> + * catalog requires transaction state).\n>\n> This was the original idea, but since there's now locking to make all DDL\n> safe,\n> the metadata should be considered fully transactional and no session should\n> still be able to use a concurrently dropped variable. Also, the\n> invalidation\n> messages are not sent until the transaction is committed. So is that\n> approach\n> still needed (at least for things outside ON COMMIT DROP / ON TRANSACTION\n> END\n> RESET\n>\n\nI think this is still necessary. 
The lock protects the variable against\ndrop from the second session, but not for reverted deletion from the\ncurrent session.\n\nThis implementation is due Tomas's request for\n\nCREATE VARIABLE xx AS int;\nLET xx = 100;\nBEGIN;\nDROP VARIABLE xx;\nROLLBACK;\nSELECT xx; --> 100\n\nand the variable still holds the last value before DROP\n\nPersonally, this is a corner case (for me, and I think so for users it is\nnot too interesting, and important), and this behavior is not necessary -\noriginally I implemented just the RESET variable in this case. On the other\nhand, this is a nice feature, and there is an analogy with TRUNCATE\nbehavior.\n\nMore, I promised, as a second step, implementation of optional\ntransactional behavior of session variables. And related code is necessary\nfor it. So I prefer to use related code without change.\n\nRegards\n\nPavel",
    "msg_date": "Wed, 26 Jan 2022 14:43:54 +0100",
    "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
    "msg_from_op": true,
    "msg_subject": "Re: Schema variables - new implementation for Postgres 15"
  },
  {
    "msg_contents": "Hi,\n\nOn Wed, Jan 26, 2022 at 02:43:54PM +0100, Pavel Stehule wrote:\n> \n> I think this is still necessary. The lock protects the variable against\n> drop from the second session, but not for reverted deletion from the\n> current session.\n> \n> This implementation is due Tomas's request for\n> \n> CREATE VARIABLE xx AS int;\n> LET xx = 100;\n> BEGIN;\n> DROP VARIABLE xx;\n> ROLLBACK;\n> SELECT xx; --> 100\n> \n> and the variable still holds the last value before DROP\n\nI thought about this case, but assumed that the own session wouldn't process\nthe inval until commit. Agreed then, although the comment should clarify the\ntransactional behavior and why it's still necessary.\n\n> Personally, this is a corner case (for me, and I think so for users it is\n> not too interesting, and important), and this behavior is not necessary -\n> originally I implemented just the RESET variable in this case. 
On the other\n> hand, this is a nice feature, and there is an analogy with TRUNCATE\n> behavior.\n> \n> More, I promised, as a second step, implementation of optional\n> transactional behavior of session variables. And related code is necessary\n> for it. So I prefer to use related code without change.\n\nThat's another good reason, so fine by me!\n\n\n", "msg_date": "Wed, 26 Jan 2022 21:55:27 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "st 26. 1. 2022 v 8:23 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> Hi,\n>\n> On Tue, Jan 25, 2022 at 10:53:00PM +0100, Pavel Stehule wrote:\n> >\n> > út 25. 1. 2022 v 6:18 odesílatel Julien Rouhaud <rjuju123@gmail.com>\n> napsal:\n> > >\n> > > First, I don't think that acquiring the lock in\n> > > get_session_variable_type_typmod_collid() and\n> > > prepare_variable_for_reading() is\n> > > the correct approach. In transformColumnRef() and transformLetStmt()\n> you\n> > > first\n> > > call IdentifyVariable() to check if the given name is a variable\n> without\n> > > locking it and later try to lock the variable if you get a valid Oid.\n> > > This is\n> > > bug prone as any other backend could drop the variable between the two\n> > > calls\n> > > and you would end up with a cache lookup failure. I think the lock\n> should\n> > > be\n> > > acquired during IdentifyVariable. 
It should probably be optional as\n> one\n> > > codepath only needs the information to raise a warning when a variable\n> is\n> > > shadowed, so a concurrent drop isn't a problem there.\n> > >\n> >\n> > I moved lock to IdentifyVariable routine\n>\n> +IdentifyVariable(List *names, char **attrname, bool lockit, bool\n> *not_unique)\n> +{\n> [...]\n> + return varoid_without_attr;\n> + }\n> + else\n> + {\n> + *attrname = c;\n> + return varoid_with_attr;\n> [...]\n> +\n> + if (OidIsValid(varid) && lockit)\n> + LockDatabaseObject(VariableRelationId, varid, 0, AccessShareLock);\n> +\n> + return varid;\n>\n> There are still some code paths that may not lock the target variable when\n> required.\n>\n\nfixed\n\n\n>\n> Also, the function comment doesn't say much about attrname handling, it\n> should\n> be clarifed. I think it should initially be set to NULL, to make sure that\n> it's always a valid pointer after the function returns.\n>\n\ndone\n\n\n>\n>\n> > attached updated patch\n>\n\n> Various comments on the patch:\n>\n> No test for GRANT/REVOKE ... ALL VARIABLES IN SCHEMA, maybe it would be\n> good to\n> have one?\n>\n\ndone\n\n\n>\n> Documentation:\n>\n> catalogs.sgml:\n>\n> You're still using the old-style 4 columns table, it should be a single\n> column\n> like the rest of the file.\n>\n\ndone\n\n\n>\n> + <para>\n> + The <command>CREATE VARIABLE</command> command creates a session\n> variable.\n> + Session variables, like relations, exist within a schema and their\n> access is\n> + controlled via <command>GRANT</command> and <command>REVOKE</command>\n> + commands. Changing a session variable is non-transactional.\n> + </para>\n>\n> The \"changing a session variable is non-transactional\" is ambiguous. I\n> think\n> that only the value part isn't transactional, the variable metadata\n> themselves\n> (ALTER VARIABLE and other DDL) are transactional right? 
This should be\n> explicitly described here (although it's made less ambiguous in the next\n> paragraph).\n>\n\nsure, DDL of session variables are transactional. I removed this sentence.\n\n\n> + <para>\n> + Session variables are retrieved by the <command>SELECT</command> SQL\n> + command. Their value is set with the <command>LET</command> SQL\n> command.\n> + While session variables share properties with tables, their value\n> cannot be\n> + updated with an <command>UPDATE</command> command.\n> + </para>\n>\n> should this part mention that session variables can be shadowed? For now\n> the\n> only mention to that is in advanced.sgml.\n>\n\ngood idea, I wrote note about it there\n\n\n>\n> + The <literal>DEFAULT</literal> clause can be used to assign a\n> default\n> + value to a session variable.\n>\n> The expression is lazily evaluated during the session first use of the\n> variable. This should be documented as any usage of volatile expression\n> will\n> be impacted.\n>\n\ndone\n\n\n\n>\n> + The <literal>ON TRANSACTION END RESET</literal>\n> + clause causes the session variable to be reset to its default value\n> when\n> + the transaction is committed or rolled back.\n>\n> As far as I can see this clauses doesn't play well with IMMUTABLE\n> VARIABLE, as\n> you can reassign a value once the transaction ends. Same for DISCARD [\n> ALL |\n> VARIABLES ], or LET var = NULL (or DEFAULT if no default value). Is that\n> intended?\n>\n\nI think so it is expected. The life scope of assigned (immutable) value is\nlimited to transaction (when ON TRANSACTION END RESET).\nDISCARD is used for reset of session, and after it, you can write the value\nfirst time.\n\nI enhanced doc in IMMUTABLE clause\n\n\n> + <literal>LET</literal> extends the syntax defined in the SQL\n> + standard. The <literal>SET</literal> command from the SQL standard\n> + is used for different purposes in\n> <productname>PostgreSQL</productname>.\n>\n> I don't fully understand that. 
Are (session) variables defined in the SQL\n> standard? If yes, all the other documentation pages should clarify that as\n> they currently say that this is a postgres extension. If not, this part\n> should\n> made it clear what is defined in the standard.\n>\n\nI reread standard more carefully, and it looks so SQL/PSM doesn't define\nglobal variables ever. The modules defined by SQL/PSM can holds only\ntemporal tables or routines. Unfortunately, this part of standard is almost\ndead, and there is not referential implementation. The most near to\nstandard in this area is DB2, but global session variables are proprietary\nfeature. The usage is very similar to our session variables with one\nsignificant difference - the global session variables can be modified by\ncommands SELECT INTO, VALUES INTO, EXECUTE INTO and SET (Our session\nvariables can be modified just by LET command.). I am sure, so if SQL/PSM\nsupports global session variables, then it uses SET statement - like DB2,\nbut I didn't find any note about support in standard.\n\nI think so the best comment to compatibility is just\n\n <para>\n The <command>LET</command> is a <productname>PostgreSQL</productname>\n extension.\n </para>\n\n\n\n>\n> In revoke.sgml:\n> + REVOKE [ GRANT OPTION FOR ]\n> + { { READ | WRITE } [, ...] | ALL [ PRIVILEGES ] }\n> + ON VARIABLE <replaceable>variable_name</replaceable> [, ...]\n> + FROM { [ GROUP ] <replaceable\n> class=\"parameter\">role_name</replaceable> | PUBLIC } [, ...]\n> + [ CASCADE | RESTRICT ]\n>\n> there's no extra documentation for that, and therefore no clarification on\n> variable_name.\n>\n\nThis is same like function_name, domain_name, ...\n\n\n>\n> VariableIsVisible():\n> + * If it is in the path, it might still not be visible; it\n> could be\n> + * hidden by another relation of the same name earlier in\n> the path. 
So\n> + * we must do a slow check for conflicting relations.\n>\n> should it be \"another variable of the same name\"?\n>\n>\nyes, fixed\n\n\n\n>\n> Tab completion: CREATE IMMUTABLE VARIABLE is not handled\n>\n\nfixed\n\n\n>\n>\n> pg_variable.c:\n> Do we really need both session_variable_get_name() and\n> get_session_variable_name()?\n>\n\nThey are different - first returns possibly qualified name, second returns\nonly name. Currently it is used just for error messages in\ntransformAssignmentIndirection, and I think so it is good for consistency\nwith other usage of this routine (transformAssignmentIndirection).\n\n\n>\n> +/*\n> + * Fetch all fields of session variable from the syscache.\n> + */\n> +void\n> +initVariable(Variable *var, Oid varid, bool missing_ok, bool fast_only)\n>\n> As least fast_only should be documented in the function comment, especially\n> regarding var->varname, since:\n>\n> + var->oid = varid;\n> + var->name = pstrdup(NameStr(varform->varname));\n> [...]\n> + if (!fast_only)\n> + {\n> + Datum aclDatum;\n> + bool isnull;\n> +\n> + /* name */\n> + var->name = pstrdup(NameStr(varform->varname));A\n> [...]\n> + else\n> + {\n> + var->name = NULL;\n>\n> is the output value guaranteed or not? In any case it shouldn't be set\n> twice.\n>\n\nIt was messed, fixed\n\n\n>\n> Also, I don't see any caller for missing_ok == true, should we remove it?\n>\n\nremoved\n\n\n>\n> +/*\n> + * Create entry in pg_variable table\n> + */\n> +ObjectAddress\n> +VariableCreate(const char *varName,\n> [...]\n> + /* dependency on any roles mentioned in ACL */\n> + if (varacl != NULL)\n> + {\n> + int nnewmembers;\n> + Oid *newmembers;\n> +\n> + nnewmembers = aclmembers(varacl, &newmembers);\n> + updateAclDependencies(VariableRelationId, varid, 0,\n> + varOwner,\n> + 0, NULL,\n> + nnewmembers, newmembers);\n>\n> Shouldn't you use recordDependencyOnNewAcl() instead? 
Also, sn't it\n> missing a\n> recordDependencyOnOwner()?\n>\n\nchanged and fixed\n\n\n>\n> sessionvariable.c:\n>\n> + * Although session variables are not transactional, we don't\n> + * want (and we cannot) to run cleaning immediately (when we\n> + * got sinval message). The value of session variables can\n> + * be still used or the operation that emits cleaning can be\n> + * reverted. Unfortunatelly, this check can be done only in\n> + * when transaction is committed (the check against system\n> + * catalog requires transaction state).\n>\n> This was the original idea, but since there's now locking to make all DDL\n> safe,\n> the metadata should be considered fully transactional and no session should\n> still be able to use a concurrently dropped variable. Also, the\n> invalidation\n> messages are not sent until the transaction is committed. So is that\n> approach\n> still needed (at least for things outside ON COMMIT DROP / ON TRANSACTION\n> END\n> RESET)?\n>\n\nI enhanced comment\n\n\n>\n> I'm also attaching a 3rd patch with some proposition for documentation\n> rewording (including consistent use of *session* variable), a few comments\n> rewording, copyright year bump and minor things like that.\n>\n\nThank you very much for it. This patch is based on your changes.\n\nRegards\n\nPavel\n\n\n>\n> Note that I still didn't really review pg_variable.c or sessionvariable.c\n> since\n> there might be significant changes there for either the sinval / immutable\n> part\n> I mentioned.\n>", "msg_date": "Fri, 28 Jan 2022 07:51:08 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi,\n\nOn Fri, Jan 28, 2022 at 07:51:08AM +0100, Pavel Stehule wrote:\n> st 26. 1. 
2022 v 8:23 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n> \n> > + The <literal>ON TRANSACTION END RESET</literal>\n> > + clause causes the session variable to be reset to its default value\n> > when\n> > + the transaction is committed or rolled back.\n> >\n> > As far as I can see this clauses doesn't play well with IMMUTABLE\n> > VARIABLE, as\n> > you can reassign a value once the transaction ends. Same for DISCARD [\n> > ALL |\n> > VARIABLES ], or LET var = NULL (or DEFAULT if no default value). Is that\n> > intended?\n> >\n> \n> I think so it is expected. The life scope of assigned (immutable) value is\n> limited to transaction (when ON TRANSACTION END RESET).\n> DISCARD is used for reset of session, and after it, you can write the value\n> first time.\n> \n> I enhanced doc in IMMUTABLE clause\n\nI think it's still somewhat unclear:\n\n- done, no other change will be allowed in the session lifetime.\n+ done, no other change will be allowed in the session variable content's\n+ lifetime. The lifetime of content of session variable can be\n+ controlled by <literal>ON TRANSACTION END RESET</literal> clause.\n+ </para>\n\nThe \"session variable content lifetime\" is quite peculiar, as the ON\nTRANSACTION END RESET is adding transactional behavior to something that's not\nsupposed to be transactional, so more documentation about it seems appropriate.\n\nAlso DISCARD can be used any time so that's a totally different aspect of the\nimmutable variable content lifetime that's not described here.\n\nNULL handling also seems inconsistent. 
An explicit default NULL value makes it\ntruly immutable, but manually assigning NULL is a different codepath that has a\ndifferent user behavior:\n\n# create immutable variable var_immu int default null;\nCREATE VARIABLE\n\n# let var_immu = 1;\nERROR: 22005: session variable \"var_immu\" is declared IMMUTABLE\n\n# create immutable variable var_immu2 int ;\nCREATE VARIABLE\n\n# let var_immu2 = null;\nLET\n\n# let var_immu2 = null;\nLET\n\n# let var_immu2 = 1;\nLET\n\nFor var_immu2 I think that the last 2 queries should have errored out.\n\n> > In revoke.sgml:\n> > + REVOKE [ GRANT OPTION FOR ]\n> > + { { READ | WRITE } [, ...] | ALL [ PRIVILEGES ] }\n> > + ON VARIABLE <replaceable>variable_name</replaceable> [, ...]\n> > + FROM { [ GROUP ] <replaceable\n> > class=\"parameter\">role_name</replaceable> | PUBLIC } [, ...]\n> > + [ CASCADE | RESTRICT ]\n> >\n> > there's no extra documentation for that, and therefore no clarification on\n> > variable_name.\n> >\n> \n> This is same like function_name, domain_name, ...\n\nAh right.\n\n> > pg_variable.c:\n> > Do we really need both session_variable_get_name() and\n> > get_session_variable_name()?\n> >\n> \n> They are different - first returns possibly qualified name, second returns\n> only name. Currently it is used just for error messages in\n> transformAssignmentIndirection, and I think so it is good for consistency\n> with other usage of this routine (transformAssignmentIndirection).\n\nI agree that consistency with other usage is a good thing, but both functions\nhave very similar and confusing names. Usually when you need the qualified\nname the calling code just takes care of doing so. Wouldn't it be better to\nadd say get_session_variable_namespace() and construct the target string in the\ncalling code?\n\nAlso, I didn't dig a lot but I didn't see other usage with optionally qualified\nname there? 
I'm not sure how it would make sense anyway since LET semantics\nare different and the current call for session variable emit incorrect\nmessages:\n\n# create table tt(id integer);\nCREATE TABLE\n\n# create variable vv tt;\nCREATE VARIABLE\n\n# let vv.meh = 1;\nERROR: 42703: cannot assign to field \"meh\" of column \"meh\" because there is no such column in data type tt\nLINE 1: let vv.meh = 1;\n\n\n", "msg_date": "Sat, 29 Jan 2022 13:19:46 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "so 29. 1. 2022 v 6:19 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> Hi,\n>\n> On Fri, Jan 28, 2022 at 07:51:08AM +0100, Pavel Stehule wrote:\n> > st 26. 1. 2022 v 8:23 odesílatel Julien Rouhaud <rjuju123@gmail.com>\n> napsal:\n> >\n> > > + The <literal>ON TRANSACTION END RESET</literal>\n> > > + clause causes the session variable to be reset to its default\n> value\n> > > when\n> > > + the transaction is committed or rolled back.\n> > >\n> > > As far as I can see this clauses doesn't play well with IMMUTABLE\n> > > VARIABLE, as\n> > > you can reassign a value once the transaction ends. Same for DISCARD [\n> > > ALL |\n> > > VARIABLES ], or LET var = NULL (or DEFAULT if no default value). Is\n> that\n> > > intended?\n> > >\n> >\n> > I think so it is expected. The life scope of assigned (immutable) value\n> is\n> > limited to transaction (when ON TRANSACTION END RESET).\n> > DISCARD is used for reset of session, and after it, you can write the\n> value\n> > first time.\n> >\n> > I enhanced doc in IMMUTABLE clause\n>\n> I think it's still somewhat unclear:\n>\n> - done, no other change will be allowed in the session lifetime.\n> + done, no other change will be allowed in the session variable\n> content's\n> + lifetime. 
The lifetime of content of session variable can be\n> + controlled by <literal>ON TRANSACTION END RESET</literal> clause.\n> + </para>\n>\n> The \"session variable content lifetime\" is quite peculiar, as the ON\n> TRANSACTION END RESET is adding transactional behavior to something that's\n> not\n> supposed to be transactional, so more documentation about it seems\n> appropriate.\n>\n> Also DISCARD can be used any time so that's a totally different aspect of\n> the\n> immutable variable content lifetime that's not described here.\n>\n\nfixed\n\n\n\n\n>\n> NULL handling also seems inconsistent. An explicit default NULL value\n> makes it\n> truly immutable, but manually assigning NULL is a different codepath that\n> has a\n> different user behavior:\n>\n> # create immutable variable var_immu int default null;\n> CREATE VARIABLE\n>\n> # let var_immu = 1;\n> ERROR: 22005: session variable \"var_immu\" is declared IMMUTABLE\n>\n> # create immutable variable var_immu2 int ;\n> CREATE VARIABLE\n>\n> # let var_immu2 = null;\n> LET\n>\n> # let var_immu2 = null;\n> LET\n>\n> # let var_immu2 = 1;\n> LET\n>\n> For var_immu2 I think that the last 2 queries should have errored out.\n>\n\nok, I changed this behave\n\n\n>\n> > > In revoke.sgml:\n> > > + REVOKE [ GRANT OPTION FOR ]\n> > > + { { READ | WRITE } [, ...] | ALL [ PRIVILEGES ] }\n> > > + ON VARIABLE <replaceable>variable_name</replaceable> [, ...]\n> > > + FROM { [ GROUP ] <replaceable\n> > > class=\"parameter\">role_name</replaceable> | PUBLIC } [, ...]\n> > > + [ CASCADE | RESTRICT ]\n> > >\n> > > there's no extra documentation for that, and therefore no\n> clarification on\n> > > variable_name.\n> > >\n> >\n> > This is same like function_name, domain_name, ...\n>\n> Ah right.\n>\n> > > pg_variable.c:\n> > > Do we really need both session_variable_get_name() and\n> > > get_session_variable_name()?\n> > >\n> >\n> > They are different - first returns possibly qualified name, second\n> returns\n> > only name. 
Currently it is used just for error messages in\n> > transformAssignmentIndirection, and I think so it is good for consistency\n> > with other usage of this routine (transformAssignmentIndirection).\n>\n> I agree that consistency with other usage is a good thing, but both\n> functions\n> have very similar and confusing names. Usually when you need the qualified\n> name the calling code just takes care of doing so. Wouldn't it be better\n> to\n> add say get_session_variable_namespace() and construct the target string\n> in the\n> calling code?\n>\n\nok, I rewrote related code\n\n\n>\n> Also, I didn't dig a lot but I didn't see other usage with optionally\n> qualified\n> name there? I'm not sure how it would make sense anyway since LET\n> semantics\n> are different and the current call for session variable emit incorrect\n> messages:\n>\n\nchanged\n\n\n> # create table tt(id integer);\n> CREATE TABLE\n>\n> # create variable vv tt;\n> CREATE VARIABLE\n>\n> # let vv.meh = 1;\n> ERROR: 42703: cannot assign to field \"meh\" of column \"meh\" because there\n> is no such column in data type tt\n> LINE 1: let vv.meh = 1;\n>\n\n fixed\n\npostgres=# create table tt(id integer); create variable vv tt;\nCREATE TABLE\nCREATE VARIABLE\npostgres=# let vv.meh = 1;\nERROR: cannot assign to field \"meh\" of column or variable \"vv\" because\nthere is no such column in data type tt\nLINE 1: let vv.meh = 1;\n ^\n\nRegards\n\nPavel", "msg_date": "Sun, 30 Jan 2022 14:15:31 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nrebase after 02b8048ba5dc36238f3e7c3c58c5946220298d71\n\nRegards\n\nPavel", "msg_date": "Sun, 30 Jan 2022 20:09:18 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi,\n\nOn Sun, Jan 30, 2022 at 08:09:18PM 
+0100, Pavel Stehule wrote:\n> \n> rebase after 02b8048ba5dc36238f3e7c3c58c5946220298d71\n\nHere are a few comments, mostly about pg_variable.c and sessionvariable.c. I\nstopped before reading the whole patch as I have some concern about the sinval\nmechanism, which could change a bit the rest of the patch. I'm also attaching a\npatch (with .txt extension to avoid problem with the cfbot) with some comment\nupdate propositions.\n\nIn sessionvariable.c, why VariableEOXAction and VariableEOXActionCodes? Can't\nthe parser emit directly the char value, like e.g. relpersistence?\n\nextraneous returns for 2 functions:\n\n+void\n+get_session_variable_type_typmod_collid(Oid varid, Oid *typid, int32 *typmod,\n+ Oid *collid)\n+{\n[...]\n+ return;\n+}\n\n+void\n+initVariable(Variable *var, Oid varid, bool fast_only)\n+{\n[...]\n+ return;\n+}\n\nVariableCreate():\n\nMaybe add a bunch of AssertArg() for all the mandatory parameters?\n\nAlso, the check for variable already existing should be right after the\nAssertArg(), and using SearchSysCacheExistsX().\n\nMaybe also adding an Assert(OidIsValid(xxxoid)) just after the\nCatalogTupleInsert(), similarly to some other creation functions?\n\n\nevent-triggers.sgml needs updating for the firing matrix, as session variables\nare compatible with event triggers.\n\n+typedef enum SVariableXActAction\n+{\n+ ON_COMMIT_DROP, /* used for ON COMMIT DROP */\n+ ON_COMMIT_RESET, /* used for DROP VARIABLE */\n+ RESET, /* used for ON TRANSACTION END RESET */\n+ RECHECK /* recheck if session variable is living */\n+} SVariableXActAction;\n\nThe names seem a bit generic, maybe add a prefix like SVAR_xxx?\n\nON_COMMIT_RESET is also confusing as it looks like an SQL clause. 
Maybe\nPERFORM_DROP or something?\n\n+static List *xact_drop_actions = NIL;\n+static List *xact_reset_actions = NIL;\n\nMaybe add a comment saying both are lists of SVariableXActAction?\n\n+typedef SVariableData * SVariable;\n\nlooks like a missing bump to typedefs.list.\n\n+char *\n+get_session_variable_name(Oid varid)\n+{\n+ HeapTuple tup;\n+ Form_pg_variable varform;\n+ char *varname;\n+\n+ tup = SearchSysCache1(VARIABLEOID, ObjectIdGetDatum(varid));\n+\n+ if (!HeapTupleIsValid(tup))\n+ elog(ERROR, \"cache lookup failed for session variable %u\", varid);\n+\n+ varform = (Form_pg_variable) GETSTRUCT(tup);\n+\n+ varname = NameStr(varform->varname);\n+\n+ ReleaseSysCache(tup);\n+\n+ return varname;\n+}\n\nThis kind of function should return a palloc'd copy of the name.\n\n+void\n+ResetSessionVariables(void)\n[...]\n+ list_free_deep(xact_drop_actions);\n+ xact_drop_actions = NIL;\n+\n+ list_free_deep(xact_reset_actions);\n+ xact_drop_actions = NIL;\n+}\n\nThe 2nd chunk should be xact_reset_actions = NIL\n\n+static void register_session_variable_xact_action(Oid varid, SVariableXActAction action);\n+static void delete_session_variable_xact_action(Oid varid, SVariableXActAction action);\n\nThe naming is a bit confusing, maybe unregister_session_cable_xact_action() for\nconsistency?\n\n+void\n+register_session_variable_xact_action(Oid varid,\n+ SVariableXActAction action)\n\nthe function is missing the static keyword.\n\nIn AtPreEOXact_SessionVariable_on_xact_actions(), those 2 instructions are\nexecuted twice (once in the middle and once at the end):\n\n\tlist_free_deep(xact_drop_actions);\n\txact_drop_actions = NIL;\n\n\n\n+ * If this entry was created during the current transaction,\n+ * creating_subid is the ID of the creating subxact; if created in a prior\n+ * transaction, creating_subid is zero.\n\nI don't see any place in the code where creating_subid can be zero? 
It looks\nlike it's only there for future transactional implementation, but for now this\nattribute seems unnecessary?\n\n\n\t\t/* at transaction end recheck sinvalidated variables */\n\t\tRegisterXactCallback(sync_sessionvars_xact_callback, NULL);\n\nI don't think it's ok to use xact callback for in-core code. The function\nexplicitly says:\n\n> * These functions are intended for use by dynamically loaded modules.\n> * For built-in modules we generally just hardwire the appropriate calls\n> * (mainly because it's easier to control the order that way, where needed).\n\nAlso, this function and AtPreEOXact_SessionVariable_on_xact_actions() are\nskipping all or part of the processing if there is no active transaction. Is\nthat really ok?\n\nI'm particularly sceptical about AtPreEOXact_SessionVariable_on_xact_actions\nand the RECHECK actions, as the xact_reset_actions list is reset whether the\nrecheck was done or not, so it seems to me that it could be leaking some\nentries in the hash table. If the database has a lot of object, it seems\npossible (while unlikely) that a subsequent CREATE VARIABLE can get the same\noid leading to incorrect results?\n\nIf that's somehow ok, wouldn't it be better to rearrange the code to call those\nfunctions less often, and only when they can do their work, or at least split\nthe recheck in some different function / list?\n\n+static void\n+pg_variable_cache_callback(Datum arg, int cacheid, uint32 hashvalue)\n[...]\n+ if (hashvalue != 0)\n+ {\n[...]\n+ }\n+ else\n+ sync_sessionvars_all = true;\n\nThe rechecks being somewhat expensive, I think it could be a win to remove all\npending rechecks when setting the sync_sessionvars_all.", "msg_date": "Wed, 2 Feb 2022 22:08:52 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "st 2. 2. 
2022 v 15:09 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> Hi,\n>\n> On Sun, Jan 30, 2022 at 08:09:18PM +0100, Pavel Stehule wrote:\n> >\n> > rebase after 02b8048ba5dc36238f3e7c3c58c5946220298d71\n>\n> Here are a few comments, mostly about pg_variable.c and\n> sessionvariable.c. I\n> stopped before reading the whole patch as I have some concern about the\n> sinval\n> machanism, which ould change a bit the rest of the patch. I'm also\n> attaching a\n> patch (with .txt extension to avoid problem with the cfbot) with some\n> comment\n> update propositions.\n>\n\nmerged, thank you\n\n\n>\n> In sessionvariable.c, why VariableEOXAction and VariableEOXActionCodes?\n> Can't\n> the parser emit directly the char value, like e.g. relpersistence?\n>\n>\ngood idea, it reduces some not too useful code.\n\nremoved\n\n\n\n> extraneous returns for 2 functions:\n>\n> +void\n> +get_session_variable_type_typmod_collid(Oid varid, Oid *typid, int32\n> *typmod,\n> + Oid *collid)\n> +{\n> [...]\n> + return;\n> +}\n>\n> +void\n> +initVariable(Variable *var, Oid varid, bool fast_only)\n> +{\n> [...]\n> + return;\n> +}\n>\n\nremoved, fixed\n\n\n> VariableCreate():\n>\n> Maybe add a bunch of AssertArg() for all the mandatory parametrers?\n>\n>\ndone\n\n\n\n> Also, the check for variable already existing should be right after the\n> AssertArg(), and using SearchSysCacheExistsX().\n>\n> Maybe also adding an Assert(OidIsValid(xxxoid)) just after the\n> CatalogTupleInsert(), similarly to some other creation functions?\n>\n>\n>\ndone\n\n\n> event-triggers.sgml needs updating for the firing matrix, as session\n> variable\n> are compatible with even triggers.\n>\n\ndone\n\n\n>\n> +typedef enum SVariableXActAction\n> +{\n> + ON_COMMIT_DROP, /* used for ON COMMIT DROP */\n> + ON_COMMIT_RESET, /* used for DROP VARIABLE */\n> + RESET, /* used for ON TRANSACTION END RESET */\n> + RECHECK /* recheck if session variable is living */\n> +} SVariableXActAction;\n>\n> The names seem a bit 
generic, maybe add a prefix like SVAR_xxx?\n>\n\ndone\n\n\n>\n> ON_COMMIT_RESET is also confusing as it looks like an SQL clause. Maybe\n> PERFORM_DROP or something?\n>\n>\nIn this case, I think the name of this variable is accurate.\n\nsee comment\n\n<-->/*\n<--> * and if this transaction or subtransaction will be committed,\n<--> * we want to enforce variable cleaning. (we don't need to wait for\n<--> * sinval message). The cleaning action for one session variable\n<--> * can be repeated in the action list, and it doesn't do any problem\n<--> * (so we don't need to ensure uniqueness). We need separate action\n<--> * than RESET, because RESET is executed on any transaction end,\n<--> * but we want to execute cleaning only when the current transaction\n<--> * will be committed.\n<--> */\n<-->register_session_variable_xact_action(varid, SVAR_ON_COMMIT_RESET);\n\n\n\n> +static List *xact_drop_actions = NIL;\n> +static List *xact_reset_actions = NIL;\n>\n> Maybe add a comment saying both are lists of SVariableXActAction?\n>\n\ndone\n\n\n>\n> +typedef SVariableData * SVariable;\n>\n> looks like a missing bump to typedefs.list.\n>\n\ndone\n\n>\n> +char *\n> +get_session_variable_name(Oid varid)\n> +{\n> + HeapTuple tup;\n> + Form_pg_variable varform;\n> + char *varname;\n> +\n> + tup = SearchSysCache1(VARIABLEOID, ObjectIdGetDatum(varid));\n> +\n> + if (!HeapTupleIsValid(tup))\n> + elog(ERROR, \"cache lookup failed for session variable %u\", varid);\n> +\n> + varform = (Form_pg_variable) GETSTRUCT(tup);\n> +\n> + varname = NameStr(varform->varname);\n> +\n> + ReleaseSysCache(tup);\n> +\n> + return varname;\n> +}\n>\n> This kind of function should return a palloc'd copy of the name.\n>\n\nfixed\n\n\n> +void\n> +ResetSessionVariables(void)\n> [...]\n> + list_free_deep(xact_drop_actions);\n> + xact_drop_actions = NIL;\n> +\n> + list_free_deep(xact_reset_actions);\n> + xact_drop_actions = NIL;\n> +}\n>\n> The 2nd chunk should be xact_reset_actions = 
NIL\n>\n\nfixed\n\n\n>\n> +static void register_session_variable_xact_action(Oid varid,\n> SVariableXActAction action);\n> +static void delete_session_variable_xact_action(Oid varid,\n> SVariableXActAction action);\n>\n> The naming is a bit confusing, maybe\n> unregister_session_cable_xact_action() for\n> consistency?\n>\n\nchanged\n\n\n>\n> +void\n> +register_session_variable_xact_action(Oid varid,\n> + SVariableXActAction action)\n>\n> the function is missing the static keyword.\n>\n\nfixed\n\n\n>\n> In AtPreEOXact_SessionVariable_on_xact_actions(), those 2 instructions are\n> executed twice (once in the middle and once at the end):\n>\n> list_free_deep(xact_drop_actions);\n> xact_drop_actions = NIL;\n>\n>\nfixed\n\n\n>\n>\n> + * If this entry was created during the current transaction,\n> + * creating_subid is the ID of the creating subxact; if created in a\n> prior\n> + * transaction, creating_subid is zero.\n>\n> I don't see any place in the code where creating_subid can be zero? It\n> looks\n> like it's only there for future transactional implementation, but for now\n> this\n> attribute seems unnecessary?\n>\n\nThe comment is not 100% valid. I removed the sentence about zero value of\ncreating_subid.\n\nI think this attribute is necessary for correct behavior, because these\nrelated action lists should always be correct - you should not drop\nvariables twice\n\nand there are possible things like\n\nbegin;\ncreate variable xx as int on transaction end reset;\nlet xx =100;\nselect xx;\nsavepoint s1;\ndrop variable xx;\nrollback to s1;\nrollback;\n\nIn the first version I had simplified code, and I remember, there was a\nproblem when variables were modified in a subtransaction or dropped, then I\ngot messages related to missing objects. 
The implemented code is based on a\npattern already used in Postgres.\n\n\n> /* at transaction end recheck sinvalidated variables */\n> RegisterXactCallback(sync_sessionvars_xact_callback, NULL);\n>\n> I don't think it's ok to use xact callback for in-core code. The function\n> explicitly says:\n>\n> > * These functions are intended for use by dynamically loaded modules.\n> > * For built-in modules we generally just hardwire the appropriate calls\n> > * (mainly because it's easier to control the order that way, where\n> needed).\n>\n\nIt was a serious issue - after checking, I removed all related code. The\nsinval handler is called without a hash only after the ANALYZE command. In this\ncase, we don't need to run any action.\n\n\n> Also, this function and AtPreEOXact_SessionVariable_on_xact_actions() are\n> skipping all or part of the processing if there is no active transaction.\n> Is\n> that really ok?\n>\n\nThis part was +/- ok, although I can use just isCommit, but there was a\nbug. I cannot clean xact_reset_actions every time. It can be done just when\nisCommit. I fixed this issue\nand fixed memory leaks there.\n\n\n>\n> I'm particularly sceptical about\n> AtPreEOXact_SessionVariable_on_xact_actions\n> and the RECHECK actions, as the xact_reset_actions list is reset whether\n> the\n> recheck was done or not, so it seems to me that it could be leaking some\n> entries in the hash table. 
If the database has a lot of object, it seems\n> possible (while unlikely) that a subsequent CREATE VARIABLE can get the\n> same\n> oid leading to incorrect results?\n>\n>\nit was buggy, I fixed it\n\n\n> If that's somehow ok, wouldn't it be better to rearrange the code to call\n> those\n> functions less often, and only when they can do their work, or at least\n> split\n> the recheck in some different function / list?\n>\n> +static void\n> +pg_variable_cache_callback(Datum arg, int cacheid, uint32 hashvalue)\n> [...]\n> + if (hashvalue != 0)\n> + {\n> [...]\n> + }\n> + else\n> + sync_sessionvars_all = true;\n>\n> The rechecks being somewhat expensive, I think it could be a win to remove\n> all\n> pending rechecks when setting the sync_sessionvars_all.\n>\n\nI removed it\n\nI am sending an updated and rebased patch.\n\nRegards\n\nPavel", "msg_date": "Tue, 1 Mar 2022 05:50:45 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\n\n>> + * If this entry was created during the current transaction,\n>> + * creating_subid is the ID of the creating subxact; if created in a\n>> prior\n>> + * transaction, creating_subid is zero.\n>>\n>> I don't see any place in the code where creating_subid can be zero? It\n>> looks\n>> like it's only there for future transactional implementation, but for now\n>> this\n>> attribute seems unnecessary?\n>>\n>\n> The comment is not 100% valid. I removed the sentence about zero value of\n> creating_subid.\n>\n\nI lost commit with this change. I am sending updated patch.\n\nRegards\n\nPavel", "msg_date": "Wed, 2 Mar 2022 06:03:06 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi,\n\nOn Wed, Mar 02, 2022 at 06:03:06AM +0100, Pavel Stehule wrote:\n> \n> I lost commit with this change. 
I am sending updated patch.\n\nThanks a lot Pavel!\n\nI did a more thorough review of the patch. I'm attaching a diff (in .txt\nextension) for comment improvement suggestions. I may have misunderstood\nthings so feel free to discard some of it. I will mention the comment I didn't\nunderstand in this mail.\n\nFirst, I spotted some problems in the invalidation logic.\n\n+ * Assign sinval mark to session variable. This mark probably\n+ * signalized, so the session variable was dropped. But this\n+ * should be rechecked later against system catalog.\n+ */\n+static void\n+pg_variable_cache_callback(Datum arg, int cacheid, uint32 hashvalue)\n\nYou mention that hashvalue can only be zero for commands that can't\naffect session variables (like VACUUM or ANALYZE), but that's not true. It can\nalso happen in case of sinval queue overflow (see InvalidateSystemCaches()).\nSo in that case we should trigger a full recheck, with some heuristics on how\nto detect that a cached variable is still valid. Unfortunately the oid can\nwrap around so some other check is needed to make it safe.\n\nAlso, even if we get a non-zero hashvalue in the inval callback, we can't\nassume that there weren't any collisions in the hash. So the additional check\nshould be used there too.\n\nWe had a long off-line discussion about this with Pavel yesterday on what\nheuristic to use there. Unlike other caches where discarding an entry when it\nshouldn't have been is not really problematic, the cache here contains the real\nvariable value so we can't discard it unless the variable was really dropped.\nIt should be possible to make it work, so I will let Pavel comment on which\napproach he wants to use and what the drawbacks are. 
I guess that this will be\nthe most critical part of this patch to decide whether the approach is\nacceptable or not.\n\n\nThe rest is only minor stylistic comments.\n\nUsing -DRAW_EXPRESSION_COVERAGE_TEST I see that T_LetStmt is missing in\nraw_expression_tree_walker.\n\nALTER and DROP both suggest \"IMMUTABLE VARIABLE\" as valid completion, while\nit should only be usable in the CREATE [ IMMUTABLE ] VARIABLE form.\n\n+initVariable(Variable *var, Oid varid, bool fast_only)\n+{\n+ var->collation = varform->varcollation;\n+ var->eoxaction = varform->vareoxaction;\n+ var->is_not_null = varform->varisnotnull;\n+ var->is_immutable = varform->varisimmutable;\n\nnit: eoxaction is defined after is_not_null and is_immutable, it would be\nbetter to keep the initialization order consistent (same in VariableCreate).\n\n+ values[Anum_pg_variable_varcollation - 1] = ObjectIdGetDatum((char) varCollation);\n+ values[Anum_pg_variable_vareoxaction - 1] = CharGetDatum(eoxaction);\n\nseems like the char cast is on the wrong variable?\n\n+ * [...] We have to hold two separate action lists:\n+ * one for dropping the session variable from system catalog, and\n+ * another one for resetting its value. Both are necessary, since\n+ * dropping a session variable also needs to enforce a reset of\n+ * the value.\n\nI don't fully understand that comment. Maybe you meant that the opposite isn't\ntrue, ie. 
highlight that a reset should *not* drop the variable, hence the two lists?\n\n+typedef enum SVariableXActAction\n+{\n+ SVAR_ON_COMMIT_DROP, /* used for ON COMMIT DROP */\n+ SVAR_ON_COMMIT_RESET, /* used for DROP VARIABLE */\n+ SVAR_RESET, /* used for ON TRANSACTION END RESET */\n+ SVAR_RECHECK /* verify if session variable still exists */\n+} SVariableXActAction;\n+\n+typedef struct SVariableXActActionItem\n+{\n+ Oid varid; /* varid of session variable */\n+ SVariableXActAction action; /* reset or drop */\n\nthe stored action isn't simply \"reset or drop\", even though the resulting\naction will be a reset or a drop (or a no-op) right? Since it's storing an enum\ndefined just before, I'd just drop the comment on action, and maybe specify that\nSVAR_RECHECK will do appropriate cleanup if the session variable doesn't exist.\n\n\n+ * Release the variable defined by varid from sessionvars\n+ * hashtab.\n+ */\n+static void\n+free_session_variable(SVariable svar)\n\nThe function name is a bit confusing given the previous function. 
Maybe this\none should be called forget_session_variable() instead, or something like that?\n\nI think the function comment should also mention that the caller is responsible for\nmaking sure that the sessionvars htab exists before calling it, for extra\nclarity, or just add an assert for that.\n\n+static void\n+free_session_variable_varid(Oid varid)\n\nSimilarly, maybe rename this function to forget_session_variable_by_id()?\n\n+static void\n+create_sessionvars_hashtable(void)\n+{\n+ HASHCTL ctl;\n+\n+ /* set callbacks */\n+ if (first_time)\n+ {\n+ /* Read sinval messages */\n+ CacheRegisterSyscacheCallback(VARIABLEOID,\n+ pg_variable_cache_callback,\n+ (Datum) 0);\n+\n+ first_time = false;\n+ }\n+\n+ /* needs its own long lived memory context */\n+ if (SVariableMemoryContext == NULL)\n+ {\n+ SVariableMemoryContext =\n+ AllocSetContextCreate(TopMemoryContext,\n+ \"session variables\",\n+ ALLOCSET_START_SMALL_SIZES);\n+ }\n\nAs far as I can see the SVariableMemoryContext can be reset but never set to\nNULL, so I think the initialization can be done in the first_time case, and\notherwise asserted that it's not NULL.\n\n+ if (!isnull && svar->typid != typid)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_DATATYPE_MISMATCH),\n+ errmsg(\"type \\\"%s\\\" of assigned value is different than type \\\"%s\\\" of session variable \\\"%s.%\n\nWhy testing isnull? I don't think it's ok to allow NULL::text in an int\nvariable for instance. This isn't valid in other contexts (like inserting into a\ntable)\n\n+ * result of default expression always). Don't do this check, when variable\n+ * is initialized.\n+ */\n+ if (!init_mode &&\n\nI think the last part of the comment is a bit misleading. 
Maybe \"when variable\nis being initialized\" (and similarly for the function comment).\n\n+ * We try not to break the previous value, if something is wrong.\n+ *\n+ * As side efect this function acquires AccessShareLock on\n+ * related session variable until commit.\n+ */\n+void\n+SetSessionVariable(Oid varid, Datum value, bool isNull, Oid typid)\n\nI don't understand what you mean by \"We try not to break the previous value, if\nsomething is wrong\".\n\n+ /* Initialize svar when not initialized or when stored value is null */\n+ if (!found)\n+ {\n+ Variable var;\n+\n+ /* don't need defexpr and acl here */\n+ initVariable(&var, varid, true);\n+ init_session_variable(svar, &var);\n+ }\n+\n+ set_session_variable(svar, value, isNull, typid, false);\n\nShouldn't the comment be on the set_session_variable() call rather than on the\n!found block?\n\n+ * Returns the value of the session variable specified by varid. Check correct\n+ * result type. Optionally the result can be copied.\n+ */\n+Datum\n+GetSessionVariable(Oid varid, bool *isNull, Oid expected_typid, bool copy)\n\nAll callers use copy == true, couldn't we get rid of it and say it returns a\ncopy of the value if any?\n\n+ * Create new ON_COMMIT_DROP xact action. We have to drop\n+ * ON COMMIT DROP variable, although this variable should not\n+ * be used. 
So we need to register this action in CREATE VARIABLE\n+ * time.\n\nI don't understand this comment.\n\n+AtPreEOXact_SessionVariable_on_xact_actions(bool isCommit)\n+{\n+ ListCell *l;\n+\n+ foreach(l, xact_drop_actions)\n+ {\n+ SVariableXActActionItem *xact_ai =\n+ (SVariableXActActionItem *) lfirst(l);\n+\n+ /* Iterate only over non dropped entries */\n+ if (xact_ai->deleting_subid == InvalidSubTransactionId)\n+ {\n+ Assert(xact_ai->action == SVAR_ON_COMMIT_DROP);\n\nThe assert should probably be in the block above.\n\n+ * We want to reset session variable (release it from\n+ * local memory) when RESET is required or when session\n+ * variable was removed explicitly (DROP VARIABLE) or\n+ * implicitly (ON COMMIT DROP). Explicit releasing should\n+ * be done only if the transaction is commited.\n+ */\n+ if ((xact_ai->action == SVAR_RESET) ||\n+ (xact_ai->action == SVAR_ON_COMMIT_RESET &&\n+ xact_ai->deleting_subid == InvalidSubTransactionId &&\n+ isCommit))\n+ free_session_variable_varid(xact_ai->varid);\n\nThis chunk is a bit hard to follow. Also, for SVAR_RESET wouldn't it be better\nto only make the svar invalid and keep it in the htab? 
If so, this could be\nsplit into two different branches which would be easier to follow.\n\n+ if (!isCommit &&\n+ xact_ai->creating_subid == mySubid &&\n+ xact_ai->action != SVAR_RESET &&\n+ xact_ai->action != SVAR_RECHECK)\n+ {\n+ /* cur_item must be removed */\n+ xact_reset_actions = foreach_delete_current(xact_reset_actions, cur_item);\n+ pfree(xact_ai);\n\nI think that by definition only the SVAR_ON_COMMIT_DROP (cleaning entry for a\ndropped session variable) will ever need to be removed there, so we should\ncheck for that instead of checking that it's not something else?\n\n\n+ /*\n+ * Prepare session variables, if not prepared in queryDesc\n+ */\n+ if (queryDesc->num_session_variables > 0)\n\nI don't understand that comment.\n\n+static void\n+svariableStartupReceiver(DestReceiver *self, int operation, TupleDesc typeinfo)\n+{\n+ svariableState *myState = (svariableState *) self;\n+ int natts = typeinfo->natts;\n+ int outcols = 0;\n+ int i;\n+\n+ for (i = 0; i < natts; i++)\n+ {\n+ Form_pg_attribute attr = TupleDescAttr(typeinfo, i);\n+\n+ if (attr->attisdropped)\n+ continue;\n+\n+ if (++outcols > 1)\n+ elog(ERROR, \"svariable DestReceiver can take only one attribute\");\n+\n+ myState->typid = attr->atttypid;\n+ myState->typmod = attr->atttypmod;\n+ myState->typlen = attr->attlen;\n+ myState->slot_offset = i;\n+ }\n+\n+ myState->rows = 0;\n+}\n\nMaybe add an initial Assert to make sure that caller did call\nSetVariableDestReceiverParams(), and a final check that one attribute was found?\n\n@@ -1794,15 +1840,39 @@ fix_expr_common(PlannerInfo *root, Node *node)\n g->cols = cols;\n }\n }\n+ else if (IsA(node, Param))\n+ {\n+ Param *p = (Param *) node;\n+\n+ if (p->paramkind == PARAM_VARIABLE)\n+ {\n+ PlanInvalItem *inval_item = makeNode(PlanInvalItem);\n+\n+ /* paramid is still session variable id */\n+ inval_item->cacheId = VARIABLEOID;\n+ inval_item->hashValue = GetSysCacheHashValue1(VARIABLEOID,\n+ ObjectIdGetDatum(p->paramvarid));\n+\n+ /* Append this variable to global, 
register dependency */\n+ root->glob->invalItems = lappend(root->glob->invalItems,\n+ inval_item);\n+ }\n+ }\n\nI didn't see any test covering invalidation of cached plans using session\nvariables. Could you add some? While at it, maybe use different values in the\nsession_variable.sql tests rather than 100 in many places, so it's easier to\nidentify which case broke in case of problems.\n\n+static Node *\n+makeParamSessionVariable(ParseState *pstate,\n+ Oid varid, Oid typid, int32 typmod, Oid collid,\n+ char *attrname, int location)\n+{\n[...]\n+ /*\n+ * There are two ways to access session variables - direct, used by simple\n+ * plpgsql expressions, where it is not necessary to emulate stability.\n+ * And Buffered access, which is used everywhere else. We should ensure\n+ * stable values, and because session variables are global, then we should\n+ * work with copied values instead of directly accessing variables. For\n+ * direct access, the varid is best. For buffered access, we need\n+ * to assign an index to the buffer - later, when we know what variables are\n+ * used. Now, we just remember, so we use session variables.\n\nI don't understand the last part, starting with \"For buffered access, we\nneed...\". Also, the beginning of the comment seems like something more general\nand may be moved somewhere, maybe at the beginning of sessionvariable.c?\n\n+ * stmt->query is SelectStmt node. An tranformation of\n+ * this node doesn't support SetToDefault node. Instead injecting\n+ * of transformSelectStmt or parse state, we can directly\n+ * transform target list here if holds SetToDefault node.\n+ */\n+ if (stmt->set_default)\n\nI don't understand this comment. Especially since the next\ntransformTargetList() will emit SetToDefault node that will be handled later in\nthat function and then in RewriteQuery.\n\n+ /*\n+ * rewrite SetToDefaults needs varid in Query structure\n+ */\n+ query->resultVariable = varid;\n\nI also don't understand that comment. 
Is it always set just in case there's a\nSetToDefault, or something else?\n\n+ /* translate paramvarid to session variable name */\n+ if (param->paramkind == PARAM_VARIABLE)\n+ {\n+ appendStringInfo(context->buf, \"%s\",\n+ generate_session_variable_name(param->paramvarid));\n+ return;\n+ }\n\nA bit more work seems to be needed for deparsing session variables:\n\n# create variable myvar text;\nCREATE VARIABLE\n\n# create view myview as select myvar;\nCREATE VIEW\n\n# \\d+ myview\n View \"public.myview\"\n Column | Type | Collation | Nullable | Default | Storage | Description\n--------+------+-----------+----------+---------+----------+-------------\n myvar | text | | | | extended |\nView definition:\n SELECT myvar AS myvar;\n\nThere shouldn't be an explicit alias I think.", "msg_date": "Thu, 3 Mar 2022 15:06:52 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On Thu, Mar 03, 2022 at 03:06:52PM +0800, Julien Rouhaud wrote:\n> Hi,\n>\n> On Wed, Mar 02, 2022 at 06:03:06AM +0100, Pavel Stehule wrote:\n> >\n> > I lost commit with this change. I am sending updated patch.\n\nAlso, another thing is the size of the patch. 
It's probably the minimum to\nhave a consistent working implementation, but maybe we can still split it to\nmake review easier?\n\nFor instance, maybe having:\n\n- the pg_variable part on its own, without a way to use them, maybe with\n syscache helpers\n- the main session variable implementation and test coverage\n- plpgsql support and test coverage\n- pg_dump support and test coverage\n\nIt wouldn't make the main patch that small but could still help quite a bit.\n\nAny better suggestion?\n\n\n", "msg_date": "Thu, 3 Mar 2022 15:16:26 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On Thu, Mar 03, 2022 at 03:06:52PM +0800, Julien Rouhaud wrote:\n> Hi,\n> \n> On Wed, Mar 02, 2022 at 06:03:06AM +0100, Pavel Stehule wrote:\n> > \n> > I lost commit with this change. I am sending updated patch.\n> \n> Thanks a lot Pavel!\n> \n> I did a more thorough review of the patch. I'm attaching a diff (in .txt\n> extension) for comment improvement suggestions. I may have misunderstood\n\nBut the attachment actually was a *.patch, so cfbot tried and failed to apply\nit.", "msg_date": "Sat, 19 Mar 2022 16:46:13 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On Sat, Mar 19, 2022 at 04:46:13PM -0500, Justin Pryzby wrote:\n> On Thu, Mar 03, 2022 at 03:06:52PM +0800, Julien Rouhaud wrote:\n> > Hi,\n> > \n> > On Wed, Mar 02, 2022 at 06:03:06AM +0100, Pavel Stehule wrote:\n> > > \n> > > I lost commit with this change. I am sending updated patch.\n> > \n> > Thanks a lot Pavel!\n> > \n> > I did a more thorough review of the patch. I'm attaching a diff (in .txt\n> > extension) for comment improvement suggestions. 
I may have misunderstood\n> \n> But the attachment actually was a *.patch, so cfbot tried and failed to apply\n> it.\n\nArgh, I indeed failed to rename the patch. Thanks!\n\n\n", "msg_date": "Sun, 20 Mar 2022 11:56:20 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\n\nA bit more work seems to be needed for deparsing session variables:\n>\n> # create variable myvar text;\n> CREATE VARIABLE\n>\n> # create view myview as select myvar;\n> CREATE VIEW\n>\n> # \\d+ myview\n>                           View \"public.myview\"\n>  Column | Type | Collation | Nullable | Default | Storage  | Description\n> --------+------+-----------+----------+---------+----------+-------------\n>  myvar  | text |           |          |         | extended |\n> View definition:\n>  SELECT myvar AS myvar;\n>\n> There shouldn't be an explicit alias I think.\n>\n\nI checked this issue, and I'm afraid it is not fixable. The target list\nentry related to a session variable does not have some magic value like ?column?\nthat can be used to check whether tle->resname is implicit or explicit\n\nAnd at this time I cannot use FigureColname because it doesn't work with\ntransformed nodes. More - the Param node can be nested in SubscriptingRef\nor FieldSelect. It doesn't work perfectly now. 
See following example:\n\ncreate type xt as (a int, b int);\ncreate view b as select (10, ((random()*100)::int)::xt).b;\n\\d+ b\nSELECT (ROW(10, (random() * 100::double precision)::integer)::xt).b AS b;\n\nRegards\n\nPavel", "msg_date": "Wed, 23 Mar 2022 21:58:59 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi,\n\nOn Wed, Mar 23, 2022 at 09:58:59PM +0100, Pavel Stehule wrote:\n> \n> A bit more work seems to be needed for deparsing session variables:\n> >\n> > # create variable myvar text;\n> > CREATE VARIABLE\n> >\n> > # create view myview as select myvar;\n> > CREATE VIEW\n> >\n> > # \\d+ myview\n> > View \"public.myview\"\n> > Column | Type | Collation | Nullable | Default | Storage | Description\n> > --------+------+-----------+----------+---------+----------+-------------\n> > myvar | text | | | | extended |\n> > View definition:\n> > SELECT myvar AS myvar;\n> >\n> > There shouldn't be an explicit alias I think.\n> >\n> \n> I check this issue, and I afraid so it is not fixable. The target list\n> entry related to session variable has not some magic value like ?column?\n> that can be used for check if tle->resname is implicit or explicit\n> \n> And in this time I cannot to use FigureColname because it doesn't work with\n> transformed nodes. More - the Param node can be nested in SubscriptingRef\n> or FieldSelect. It doesn't work perfectly now. See following example:\n> \n> create type xt as (a int, b int);\n> create view b as select (10, ((random()*100)::int)::xt).b;\n> \\d+ b\n> SELECT (ROW(10, (random() * 100::double precision)::integer)::xt).b AS b;\n\n
Since there is other code that already behaves the same I agree\nthat it's better to not add special cases in ruleutils.c and have an explicit\nalias in the deparsed view, which isn't incorrect.\n\n\n", "msg_date": "Fri, 25 Mar 2022 12:18:42 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nčt 3. 3. 2022 v 8:16 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> On Thu, Mar 03, 2022 at 03:06:52PM +0800, Julien Rouhaud wrote:\n> > Hi,\n> >\n> > On Wed, Mar 02, 2022 at 06:03:06AM +0100, Pavel Stehule wrote:\n> > >\n> > > I lost commit with this change. I am sending updated patch.\n>\n> Also, another thing is the size of the patch. It's probably the minimum to\n> have a consistent working implementation, but maybe we can still split it\n> to\n> make review easier?\n>\n> For instance, maybe having:\n>\n> - the pg_variable part on its own, without a way to use them, maybe with\n> syscache helpers\n> - the main session variable implementation and test coverage\n> - plpgsql support and test coverage\n> - pg_dump support and test coverage\n>\n> It wouldn't make the main patch that small but could still help quite a\n> bit.\n>\n> Any better suggestion?\n>\n\nI am sending fresh rebased patch + separation to more patches. This split\nis initial, and can be changed later\n\nRegards\n\nPavel", "msg_date": "Sun, 10 Apr 2022 20:30:39 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On Sun, Apr 10, 2022 at 08:30:39PM +0200, Pavel Stehule wrote:\n> I am sending fresh rebased patch + separation to more patches. 
This split\n> is initial, and can be changed later\n\nThe 0001 patch requires this, but it's not included until 0003.\nsrc/include/commands/session_variable.h\n\nEach patch should compile and pass tests with the preceding patches, without\nthe following patches. I think the regression tests should be included with\ntheir corresponding patch. Maybe it's ok to separate out the changes for\npg_dump, docs, and psql - but they'd have to be merged together eventually.\nI realize some of this runs counter to Julien's suggestion to split patches.\n\nThe version should be changed:\n+ if (fout->remoteVersion < 150000)\n\nI enabled these, which causes the regression tests fail:\n\n+#define COPY_PARSE_PLAN_TREES\n+#define WRITE_READ_PARSE_PLAN_TREES\n+#define RAW_EXPRESSION_COVERAGE_TEST\n\n/home/pryzbyj/src/postgres/src/test/regress/results/session_variables.out 2022-04-10 15:37:32.926306124 -0500\n@@ -16,7 +16,7 @@\n SET ROLE TO var_test_role;\n -- should fail\n LET var1 = 10;\n-ERROR: permission denied for session variable var1\n+ERROR: unrecognized node type: 368\n...\n\n\n", "msg_date": "Sun, 10 Apr 2022 15:43:33 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15+1" }, { "msg_contents": "Hi,\n\nOn Sun, Apr 10, 2022 at 03:43:33PM -0500, Justin Pryzby wrote:\n> On Sun, Apr 10, 2022 at 08:30:39PM +0200, Pavel Stehule wrote:\n> > I am sending fresh rebased patch + separation to more patches. This split\n> > is initial, and can be changed later\n> \n> The 0001 patch requires this, but it's not included until 0003.\n> src/include/commands/session_variable.h\n> \n> Each patch should compile and pass tests with the preceding patches, without\n> the following patches. I think the regression tests should be included with\n> their corresponding patch. 
Maybe it's ok to separate out the changes for\n> pg_dump, docs, and psql - but they'd have to be merged together eventually.\n> I realize some of this runs counter to Julien's suggestion to split patches.\n\nNote that most of my suggestions were only to make the patch easier to review,\nwhich was mostly trying to limit a bit the core of the new code.\n\nUnfortunately, given the feature we can't really split the patch in many and\nsmaller parts and expect them to be completely self contained, so I'm not\nagainst splitting smaller chunks like psql support and whatnot. But I'm not\nconvinced that it will make it easier to review.\n\n\n", "msg_date": "Mon, 11 Apr 2022 23:34:32 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15+1" }, { "msg_contents": "ne 10. 4. 2022 v 22:43 odesílatel Justin Pryzby <pryzby@telsasoft.com>\nnapsal:\n\n> On Sun, Apr 10, 2022 at 08:30:39PM +0200, Pavel Stehule wrote:\n> > I am sending fresh rebased patch + separation to more patches. This split\n> > is initial, and can be changed later\n>\n> The 0001 patch requires this, but it's not included until 0003.\n> src/include/commands/session_variable.h\n>\n> Each patch should compile and pass tests with the preceding patches,\n> without\n> the following patches. I think the regression tests should be included\n> with\n> their corresponding patch. 
Maybe it's ok to separate out the changes for\n> pg_dump, docs, and psql - but they'd have to be merged together eventually.\n> I realize some of this runs counter to Julien's suggestion to split\n> patches.\n>\n\nfixed\n\n\n>\n> The version should be changed:\n> + if (fout->remoteVersion < 150000)\n>\n\ncurrently, there is not branch for PostgreSQL 16, but I'll fix it, when new\ndevel branch will be created\n\n\n>\n> I enabled these, which causes the regression tests fail:\n>\n> +#define COPY_PARSE_PLAN_TREES\n> +#define WRITE_READ_PARSE_PLAN_TREES\n> +#define RAW_EXPRESSION_COVERAGE_TEST\n>\n> /home/pryzbyj/src/postgres/src/test/regress/results/session_variables.out\n> 2022-04-10 15:37:32.926306124 -0500\n> @@ -16,7 +16,7 @@\n> SET ROLE TO var_test_role;\n> -- should fail\n> LET var1 = 10;\n> -ERROR: permission denied for session variable var1\n> +ERROR: unrecognized node type: 368\n> ...\n>\n\nfixed\n\nI can divide regress tests, but in reality, this is just one feature, and\nit is hard to separate. Regress tests need the first 4 patches to be\npossible to test something useful.\n\nRegards\n\nPavel", "msg_date": "Tue, 12 Apr 2022 07:00:33 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15+1" }, { "msg_contents": "čt 3. 3. 2022 v 8:06 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> Hi,\n>\n> On Wed, Mar 02, 2022 at 06:03:06AM +0100, Pavel Stehule wrote:\n> >\n> > I lost commit with this change. I am sending updated patch.\n>\n> Thanks a lot Pavel!\n>\n> I did a more thorough review of the patch. I'm attaching a diff (in .txt\n> extension) for comment improvement suggestions. I may have misunderstood\n> things so feel free to discard some of it. I will mention the comment I\n> didn't\n> understand in this mail.\n>\n> First, I spotted some problem in the invalidation logic.\n>\n> + * Assign sinval mark to session variable. 
This mark probably\n> + * signalized, so the session variable was dropped. But this\n> + * should be rechecked later against system catalog.\n> + */\n> +static void\n> +pg_variable_cache_callback(Datum arg, int cacheid, uint32 hashvalue)\n>\n> You mention that hashvalue can only be zero for commands that can't\n> affect session variables (like VACUUM or ANALYZE), but that's not true.\n> It can\n> also happen in case of sinval queue overflow (see\n> InvalidateSystemCaches()).\n> So in that case we should trigger a full recheck, with some heuristics on\n> how\n> to detect that a cached variable is still valid. Unfortunately the oid can\n> wraparound so some other check is needed to make it safe.\n>\n> Also, even if we get a non-zero hashvalue in the inval callback, we can't\n> assume that there weren't any collision in the hash. So the additional\n> check\n> should be used there too.\n>\n> We had a long off-line discussion about this with Pavel yesterday on what\n> heuristic to use there. Unlike other caches where discarding an entry\n> when it\n> shouldn't have been is not really problematic, the cache here contains the\n> real\n> variable value so we can't discard it unless the variable was really\n> dropped.\n> It should be possible to make it work, so I will let Pavel comment on which\n> approach he wants to use and what the drawbacks are. I guess that this\n> will be\n> the most critical part of this patch to decide whether the approach is\n> acceptable or not.\n>\n\nI thought more about this issue, and I think it is solvable, although\ndifferently (little bit than we talked about). The check based on oid and\nxmin should not be enough for consistency check, because xmin can be\nquickly lost when a user executes VACUUM FREEZE or VACUUM FULL.\n\nThe consistency of a stored session variable should be checked always when\nthe session variable is used (for reading) the first time in a\ntransaction. 
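The protocol described here — mark a cached variable on any possibly-matching invalidation, and confirm against the system catalog only on first read in a transaction — can be pictured with a small standalone model. This is an illustrative sketch only: `SessionVarCache`, the `catalog` dict and `first_use_in_xact` are invented names, not the patch's C code.

```python
# Illustrative model of the sinval mark-and-recheck protocol (not PG code).

class SessionVarCache:
    def __init__(self, catalog):
        self.catalog = catalog   # oid -> type name; stands in for pg_variable
        self.vars = {}           # oid -> {"value": ..., "needs_recheck": bool}

    def inval_callback(self, hashvalue):
        # hashvalue == 0 models sinval queue overflow: everything is suspect.
        # A nonzero hashvalue may still be a hash collision, so a match only
        # *marks* the entry; the stored value is never discarded here.
        for entry_oid, entry in self.vars.items():
            if hashvalue == 0 or hash(entry_oid) == hashvalue:
                entry["needs_recheck"] = True

    def first_use_in_xact(self, oid):
        # On first read in a transaction, a marked entry is verified against
        # the catalog; only a confirmed drop loses the stored value.
        entry = self.vars[oid]
        if entry["needs_recheck"]:
            if oid not in self.catalog:
                del self.vars[oid]
                raise KeyError(oid)
            entry["needs_recheck"] = False
        return entry["value"]

catalog = {10: "int4"}
cache = SessionVarCache(catalog)
cache.vars[10] = {"value": 42, "needs_recheck": False}
cache.inval_callback(0)                      # overflow: mark everything
value_after_recheck = cache.first_use_in_xact(10)
```

The point of the model is that marking is cheap and conservative, while the expensive catalog lookup happens at most once per transaction per variable.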
When value is created and used in the same transaction, then\nthe consistency check is not necessary. When consistency check fails, then\nstored value is marked as broken and cannot be read. Can be overwritten.\n\nWe can believe that session variables based on buildin types are always\nconsistent.\n\nComposite types should be checked recursively from top to buildin types. It\nmeans we should hold tupledescs for all nested composites. Initially the\ncheck can be very strict.\n\nLast case is consistency check for types owned by some extensions. For this\ncase we can accept the version number of related extensions. Without change\nwe can believe so the stored binary data are consistent.\n\n\n>\n> The rest is only minor stylistic comments.\n>\n> Using -DRAW_EXPRESSION_COVERAGE_TEST I see that T_LetStmt is missing in\n> raw_expression_tree_walker.\n>\n\nfixed\n\n\n>\n> ALTER and DROP both suggest \"IMMUTABLE VARIABLE\" as valid completion, while\n> it should only be usable in the CREATE [ IMMUTABLE ] VARIABLE form.\n>\n\nfixed\n\n\n>\n> +initVariable(Variable *var, Oid varid, bool fast_only)\n> +{\n> + var->collation = varform->varcollation;\n> + var->eoxaction = varform->vareoxaction;\n> + var->is_not_null = varform->varisnotnull;\n> + var->is_immutable = varform->varisimmutable;\n>\n> nit: eoxaction is defined after is_not_null and is_immutable, it would be\n> better to keep the initialization order consistent (same in\n> VariableCreate).\n>\n\nfixed\n\n\n>\n> + values[Anum_pg_variable_varcollation - 1] = ObjectIdGetDatum((char)\n> varCollation);\n> + values[Anum_pg_variable_vareoxaction - 1] = CharGetDatum(eoxaction);\n>\n> seems like the char cast is on the wrong variable?\n>\n\nfixed\n\n\n>\n> + * [...] We have to hold two separate action lists:\n> + * one for dropping the session variable from system catalog, and\n> + * another one for resetting its value. 
Both are necessary, since\n> + * dropping a session variable also needs to enforce a reset of\n> + * the value.\n>\n> I don't fully understand that comment. Maybe you meant that the opposite\n> isn't\n> true, ie. highlight that a reset should *not* drop the variable thus two\n> lists?\n>\n\nI tried to describe the issue in the comment. When I have just one action\nlist, then I had a problem with impossibility to extend this list about\nreset action enforced by drop variable when I iterated over this list in\nxact time. This issue was solved by using two lists - one for drop and\nsecond for reset and recheck.\n\n\n>\n> +typedef enum SVariableXActAction\n> +{\n> + SVAR_ON_COMMIT_DROP, /* used for ON COMMIT DROP */\n> + SVAR_ON_COMMIT_RESET, /* used for DROP VARIABLE */\n> + SVAR_RESET, /* used for ON TRANSACTION END RESET */\n> + SVAR_RECHECK /* verify if session variable still exists\n> */\n> +} SVariableXActAction;\n> +\n> +typedef struct SVariableXActActionItem\n> +{\n> + Oid varid; /* varid of session variable */\n> + SVariableXActAction action; /* reset or drop */\n>\n> the stored action isn't simply \"reset or drop\", even though the resulting\n> action will be a reset or a drop (or a no-op) right? Since it's storing a\n> enum\n> define just before, I'd just drop the comment on action, and maybe specify\n> that\n> SVAR_RECHECK will do appropriate cleanup if the session variable doesn't\n> exist.\n>\n> done\n\n\n>\n> + * Release the variable defined by varid from sessionvars\n> + * hashtab.\n> + */\n> +static void\n> +free_session_variable(SVariable svar)\n>\n> The function name is a bit confusing given the previous function. 
Maybe\n> this\n> one should be called forget_session_variable() instead, or something like\n> that?\n>\n> I think the function comment should also mention that caller is\n> responsible for\n> making sure that the sessionvars htab exists before calling it, for extra\n> clarity, or just add an assert for that.\n>\n> +static void\n> +free_session_variable_varid(Oid varid)\n>\n> Similary, maybe renaming this function forget_session_variable_by_id()?\n>\n\nI don't like \"forget\" too much - maybe \"remove\" can be used instead - like\nHASH_REMOVE\n\n\n> +static void\n> +create_sessionvars_hashtable(void)\n> +{\n> + HASHCTL ctl;\n> +\n> + /* set callbacks */\n> + if (first_time)\n> + {\n> + /* Read sinval messages */\n> + CacheRegisterSyscacheCallback(VARIABLEOID,\n> + pg_variable_cache_callback,\n> + (Datum) 0);\n> +\n> + first_time = false;\n> + }\n> +\n> + /* needs its own long lived memory context */\n> + if (SVariableMemoryContext == NULL)\n> + {\n> + SVariableMemoryContext =\n> + AllocSetContextCreate(TopMemoryContext,\n> + \"session variables\",\n> + ALLOCSET_START_SMALL_SIZES);\n> + }\n>\n> As far as I can see the SVariableMemoryContext can be reset but never set\n> to\n> NULL, so I think the initialization can be done in the first_time case, and\n> otherwise asserted that it's not NULL.\n>\n\ndone\n\n\n>\n> + if (!isnull && svar->typid != typid)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_DATATYPE_MISMATCH),\n> + errmsg(\"type \\\"%s\\\" of assigned value is different than\n> type \\\"%s\\\" of session variable \\\"%s.%\n>\n> Why testing isnull? I don't think it's ok to allow NULL::text in an int\n> variable for instance. This isn't valid in other context (like inserting\n> in a\n> table)\n>\n\nchanged\n\n\n>\n> + * result of default expression always). Don't do this check, when\n> variable\n> + * is initialized.\n> + */\n> + if (!init_mode &&\n>\n> I think the last part of the comment is a bit misleading. 
Maybe \"when\n> variable\n> is being initialized\" (and similary same for the function comment).\n>\n\nchanged\n\n\n>\n> + * We try not to break the previous value, if something is wrong.\n> + *\n> + * As side efect this function acquires AccessShareLock on\n> + * related session variable until commit.\n> + */\n> +void\n> +SetSessionVariable(Oid varid, Datum value, bool isNull, Oid typid)\n>\n> I don't understand what you mean by \"We try not to break the previous\n> value, if\n> something is wrong\".\n>\n\nThat means, so SetSessionVariable sets a new value or should preserve the\noriginal value.\n\n\n>\n> + /* Initialize svar when not initialized or when stored value is null */\n> + if (!found)\n> + {\n> + Variable var;\n> +\n> + /* don't need defexpr and acl here */\n> + initVariable(&var, varid, true);\n> + init_session_variable(svar, &var);\n> + }\n> +\n> + set_session_variable(svar, value, isNull, typid, false);\n>\n> Shouldn't the comment be on the set_session_variable() vall rather than on\n> the\n> !found block?\n>\n\nThis comment is obsolete,\n\nremoved\n\n\n>\n> + * Returns the value of the session variable specified by varid. Check\n> correct\n> + * result type. Optionally the result can be copied.\n> + */\n> +Datum\n> +GetSessionVariable(Oid varid, bool *isNull, Oid expected_typid, bool copy)\n>\n> All callers use copy == true, couldn't we get rid of it and say it returns\n> a\n> copy of the value if any?\n>\n\nI replaced it with the new function CopySessionVariableWithTypeCheck.\nProbably in almost all situations, the copy will be required. And if not,\nwe can enhance the API later.\n\n\n> + * Create new ON_COMMIT_DROP xact action. We have to drop\n> + * ON COMMIT DROP variable, although this variable should not\n> + * be used. 
So we need to register this action in CREATE VARIABLE\n> + * time.\n>\n> I don't understand this comment.\n>\n\nchanged\n\n\n>\n> +AtPreEOXact_SessionVariable_on_xact_actions(bool isCommit)\n> +{\n> + ListCell *l;\n> +\n> + foreach(l, xact_drop_actions)\n> + {\n> + SVariableXActActionItem *xact_ai =\n> + (SVariableXActActionItem *) lfirst(l);\n> +\n> + /* Iterate only over non dropped entries */\n> + if (xact_ai->deleting_subid == InvalidSubTransactionId)\n> + {\n> + Assert(xact_ai->action == SVAR_ON_COMMIT_DROP);\n>\n> The assert sould probably be in the block above.\n>\n\nmoved\n\n\n>\n> + * We want to reset session variable (release it from\n> + * local memory) when RESET is required or when session\n> + * variable was removed explicitly (DROP VARIABLE) or\n> + * implicitly (ON COMMIT DROP). Explicit releasing should\n> + * be done only if the transaction is commited.\n> + */\n> + if ((xact_ai->action == SVAR_RESET) ||\n> + (xact_ai->action == SVAR_ON_COMMIT_RESET &&\n> + xact_ai->deleting_subid == InvalidSubTransactionId &&\n> + isCommit))\n> + free_session_variable_varid(xact_ai->varid);\n>\n> This chunk is a bit hard to follow. Also, for SVAR_RESET wouldn't it be\n> better\n> to only make the svar invalid and keep it in the htab? If so, this could\n> be\n> split in two different branches which would be easier to follow.\n>\n\nAfter some experiments, I think it is more simple to remove the svar entry\nin htab. It reduces the state space, and variable initialization once per\ntransaction is not expensive. 
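The commit/abort semantics under discussion — a plain reset action always purges the locally cached value, while the reset registered by DROP VARIABLE must fire only when the transaction (and the drop itself) really commits — can be modeled in a few lines. Illustrative sketch only: `ActionItem`, `at_eoxact` and the dict used as value storage are invented stand-ins for the patch's C structures.

```python
# Toy model of end-of-transaction cleanup for session variables (not PG code).
from dataclasses import dataclass

INVALID_SUBID = 0   # stands in for InvalidSubTransactionId

@dataclass
class ActionItem:
    varid: int
    action: str                     # "RESET" or "ON_COMMIT_RESET"
    deleting_subid: int = INVALID_SUBID

def at_eoxact(actions, memory, is_commit):
    # RESET (ON TRANSACTION END RESET) always purges the cached value.
    # ON_COMMIT_RESET is registered by DROP VARIABLE: it purges only when
    # the transaction commits and the drop was not itself rolled back
    # (deleting_subid still invalid).
    for item in actions:
        if item.action == "RESET":
            memory.pop(item.varid, None)
        elif (item.action == "ON_COMMIT_RESET" and is_commit
              and item.deleting_subid == INVALID_SUBID):
            memory.pop(item.varid, None)
    actions.clear()

memory = {1: "a", 2: "b"}
at_eoxact([ActionItem(1, "RESET"), ActionItem(2, "ON_COMMIT_RESET")],
          memory, is_commit=False)     # abort: only the plain RESET fires
```

Running the abort case first shows why the two kinds of action cannot be folded into one: rolling back a DROP VARIABLE must leave the value intact.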
The problem is in necessary xact action\nregistration and now I can call it simply just from init_session_variable.\nI updated comments there\n\n\n\n>\n> + if (!isCommit &&\n> + xact_ai->creating_subid == mySubid &&\n> + xact_ai->action != SVAR_RESET &&\n> + xact_ai->action != SVAR_RECHECK)\n> + {\n> + /* cur_item must be removed */\n> + xact_reset_actions =\n> foreach_delete_current(xact_reset_actions, cur_item);\n> + pfree(xact_ai);\n>\n> I think that be definition only the SVAR_ON_COMMIT_DROP (cleaning entry\n> for a\n> dropped session variable) will ever need to be removed there, so we should\n> check for that instead of not being something else?\n>\n>\nfixed\n\n\n>\n> + /*\n> + * Prepare session variables, if not prepared in queryDesc\n> + */\n> + if (queryDesc->num_session_variables > 0)\n>\n\nI don't understand that comment.\n>\n\nI changed this comment\n\n\n\n>\n> +static void\n> +svariableStartupReceiver(DestReceiver *self, int operation, TupleDesc\n> typeinfo)\n> +{\n> + svariableState *myState = (svariableState *) self;\n> + int natts = typeinfo->natts;\n> + int outcols = 0;\n> + int i;\n> +\n> + for (i = 0; i < natts; i++)\n> + {\n> + Form_pg_attribute attr = TupleDescAttr(typeinfo, i);\n> +\n> + if (attr->attisdropped)\n> + continue;\n> +\n> + if (++outcols > 1)\n> + elog(ERROR, \"svariable DestReceiver can take only one\n> attribute\");\n> +\n> + myState->typid = attr->atttypid;\n> + myState->typmod = attr->atttypmod;\n> + myState->typlen = attr->attlen;\n> + myState->slot_offset = i;\n> + }\n> +\n> + myState->rows = 0;\n> +}\n>\n> Maybe add an initial Assert to make sure that caller did call\n> SetVariableDestReceiverParams(), and final check that one attribute was\n> found?\n>\n\ndone\n\n\n>\n> @@ -1794,15 +1840,39 @@ fix_expr_common(PlannerInfo *root, Node *node)\n> g->cols = cols;\n> }\n> }\n> + else if (IsA(node, Param))\n> + {\n> + Param *p = (Param *) node;\n> +\n> + if (p->paramkind == PARAM_VARIABLE)\n> + {\n> + PlanInvalItem *inval_item = 
makeNode(PlanInvalItem);\n> +\n> + /* paramid is still session variable id */\n> + inval_item->cacheId = VARIABLEOID;\n> + inval_item->hashValue = GetSysCacheHashValue1(VARIABLEOID,\n> +\n> ObjectIdGetDatum(p->paramvarid));\n> +\n> + /* Append this variable to global, register dependency */\n> + root->glob->invalItems = lappend(root->glob->invalItems,\n> + inval_item);\n> + }\n> + }\n>\n> I didn't see any test covering invalidation of cached plan using session\n> variables. Could you add some? While at it, maybe use different values\n> on the\n> sesssion_variable.sql tests rather than 100 in many places, so it's easier\n> to\n> identifier which case broke in case of problem.\n>\n\nI created new tests there\n\n>\n> +static Node *\n> +makeParamSessionVariable(ParseState *pstate,\n> + Oid varid, Oid typid, int32 typmod, Oid collid,\n> + char *attrname, int location)\n> +{\n> [...]\n> + /*\n> + * There are two ways to access session variables - direct, used by\n> simple\n> + * plpgsql expressions, where it is not necessary to emulate stability.\n> + * And Buffered access, which is used everywhere else. We should ensure\n> + * stable values, and because session variables are global, then we\n> should\n> + * work with copied values instead of directly accessing variables. For\n> + * direct access, the varid is best. For buffered access, we need\n> + * to assign an index to the buffer - later, when we know what\n> variables are\n> + * used. Now, we just remember, so we use session variables.\n>\n> I don't understand the last part, starting with \"For buffered access, we\n> need...\". Also, the beginning of the comment seems like something more\n> general\n> and may be moved somewhere, maybe at the beginning of sessionvariable.c?\n>\n\nmoved to sessionvariable.c and modified.\n\n\n> + * stmt->query is SelectStmt node. An tranformation of\n> + * this node doesn't support SetToDefault node. 
Instead injecting\n> + * of transformSelectStmt or parse state, we can directly\n> + * transform target list here if holds SetToDefault node.\n> + */\n> + if (stmt->set_default)\n>\n> I don't understand this comment. Especially since the next\n> transformTargetList() will emit SetToDefault node that will be handled\n> later in\n> that function and then in RewriteQuery.\n>\n\nThis is messy, sorry. SelectStmt doesn't support SetToDefault. LetStmt\nsupports it. I reworded.\n\n\n> + /*\n> + * rewrite SetToDefaults needs varid in Query structure\n> + */\n> + query->resultVariable = varid;\n>\n> I also don't understand that comment. Is is always set just in case\n> there's a\n> SetToDefault, or something else?\n>\n\nThis comment is not complete. This value is required by QueryRewriter (for\nreplacement of the SetToDefault node by defexpr). It is required for\nacquiring locks, and for execution.\n\nI rewrote this comment\n\n\n\n>\n> + /* translate paramvarid to session variable name */\n> + if (param->paramkind == PARAM_VARIABLE)\n> + {\n> + appendStringInfo(context->buf, \"%s\",\n> +\n> generate_session_variable_name(param->paramvarid));\n> + return;\n> + }\n>\n> A bit more work seems to be needed for deparsing session variables:\n>\n\n> # create variable myvar text;\n> CREATE VARIABLE\n>\n> # create view myview as select myvar;\n> CREATE VIEW\n>\n> # \\d+ myview\n> View \"public.myview\"\n> Column | Type | Collation | Nullable | Default | Storage | Description\n> --------+------+-----------+----------+---------+----------+-------------\n> myvar | text | | | | extended |\n> View definition:\n> SELECT myvar AS myvar;\n>\n> There shouldn't be an explicit alias I think.\n>\n\nthis issue was described in other thread\n\nI am sending rebased, updated patches. 
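The rewrite being clarified here — `LET var = DEFAULT` keeps a placeholder node through parsing, and the query rewriter uses the variable recorded in `Query.resultVariable` to substitute the stored default expression — can be mimicked on a toy expression tree. Illustrative sketch only: the nested-list node encoding and `rewrite_let` are invented for this model, not the patch's C implementation.

```python
# Toy rewriter for LET ... = DEFAULT (illustrative, not PostgreSQL code).

SET_TO_DEFAULT = object()      # stands in for the SetToDefault parse node

def rewrite_let(expr, result_variable, defaults):
    # Substitute the target variable's default expression for every
    # SET_TO_DEFAULT placeholder; result_variable plays the role of
    # Query.resultVariable, telling the rewriter whose default to use.
    if expr is SET_TO_DEFAULT:
        return defaults[result_variable]
    if isinstance(expr, list):
        return [rewrite_let(e, result_variable, defaults) for e in expr]
    return expr

defaults = {"myvar": ["+", 1, 2]}          # stored default expression: 1 + 2
rewritten = rewrite_let(SET_TO_DEFAULT, "myvar", defaults)
nested = rewrite_let(["coalesce", SET_TO_DEFAULT, 0], "myvar", defaults)
```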
The type check is not implemented\nyet.\n\nRegards\n\nPavel", "msg_date": "Tue, 21 Jun 2022 10:46:06 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "fixed tabcomplete reported by patch tester\n\nRegards\n\nPavel", "msg_date": "Tue, 21 Jun 2022 22:24:14 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nfresh rebase + type check. Before returning any value, the related type is\nchecked if it is valid still\n\nRegards\n\nPavel", "msg_date": "Tue, 5 Jul 2022 08:42:09 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi Pavel,\n\nOn Tue, Jul 05, 2022 at 08:42:09AM +0200, Pavel Stehule wrote:\n> Hi\n>\n> fresh rebase + type check. Before returning any value, the related type is\n> checked if it is valid still\n\nGreat news, thanks a lot for keeping working on it! I'm still in PTO since\nlast Friday, but I'm planning to start reviewing this patch as soon as I come\nback. It might take a while as my knowledge of this patch are a bit blurry but\nhopefully it shouldn't take too long.\n\n\n", "msg_date": "Tue, 5 Jul 2022 18:50:32 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "út 5. 7. 2022 v 12:50 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> Hi Pavel,\n>\n> On Tue, Jul 05, 2022 at 08:42:09AM +0200, Pavel Stehule wrote:\n> > Hi\n> >\n> > fresh rebase + type check. Before returning any value, the related type\n> is\n> > checked if it is valid still\n>\n> Great news, thanks a lot for keeping working on it! 
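The read-time type check added in this update (a stored value is only returned after verifying that its recorded type still matches what the caller expects, since the type may have changed since the value was written) can be modeled briefly. Illustrative sketch only: `SessionVar` and `copy_value_with_type_check` are invented names echoing the patch's `CopySessionVariableWithTypeCheck`.

```python
# Toy model of a type-checked, copying read of a session variable.
import copy

class TypeMismatch(Exception):
    pass

class SessionVar:
    def __init__(self, typid, value):
        self.typid = typid
        self.value = value

def copy_value_with_type_check(svar, expected_typid):
    # Refuse to hand out a value whose recorded type no longer matches what
    # the caller expects; otherwise return a private copy so a later LET
    # cannot mutate a value the caller is still holding.
    if svar.typid != expected_typid:
        raise TypeMismatch((svar.typid, expected_typid))
    return copy.deepcopy(svar.value)

svar = SessionVar(typid=25, value=["hello"])
got = copy_value_with_type_check(svar, expected_typid=25)
```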
I'm still in PTO since\n> last Friday, but I'm planning to start reviewing this patch as soon as I\n> come\n> back. It might take a while as my knowledge of this patch are a bit\n> blurry but\n> hopefully it shouldn't take too long.\n>\n\nThank you\n\nPavel\n\nút 5. 7. 2022 v 12:50 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:Hi Pavel,\n\nOn Tue, Jul 05, 2022 at 08:42:09AM +0200, Pavel Stehule wrote:\n> Hi\n>\n> fresh rebase + type check. Before returning any value, the related type is\n> checked if it is valid still\n\nGreat news, thanks a lot for keeping working on it!  I'm still in PTO since\nlast Friday, but I'm planning to start reviewing this patch as soon as I come\nback.  It might take a while as my knowledge of this patch are a bit blurry but\nhopefully it shouldn't take too long.Thank youPavel", "msg_date": "Tue, 5 Jul 2022 13:19:42 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nút 5. 7. 2022 v 8:42 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> fresh rebase + type check. Before returning any value, the related type is\n> checked if it is valid still\n>\n\nThis set of patches should to help me with investigation of regress test\nfail reported by cfbot\n\nRegards\n\nPavel\n\n\n\n>\n> Regards\n>\n> Pavel\n>", "msg_date": "Wed, 6 Jul 2022 22:30:31 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "st 6. 7. 2022 v 22:30 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> út 5. 7. 2022 v 8:42 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> napsal:\n>\n>> Hi\n>>\n>> fresh rebase + type check. 
Before returning any value, the related type\n>> is checked if it is valid still\n>>\n>\n> This set of patches should to help me with investigation of regress test\n> fail reported by cfbot\n>\n\nnext step\n\n\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>", "msg_date": "Thu, 7 Jul 2022 07:49:47 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On Wed, Jul 06, 2022 at 10:30:31PM +0200, Pavel Stehule wrote:\n> This set of patches should to help me with investigation of regress test\n> fail reported by cfbot\n\nDo you know you can do the same as what cfbot does under your own github\naccount ? Please see: src/tools/ci/README\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 7 Jul 2022 01:43:11 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "čt 7. 7. 2022 v 8:43 odesílatel Justin Pryzby <pryzby@telsasoft.com> napsal:\n\n> On Wed, Jul 06, 2022 at 10:30:31PM +0200, Pavel Stehule wrote:\n> > This set of patches should to help me with investigation of regress test\n> > fail reported by cfbot\n>\n> Do you know you can do the same as what cfbot does under your own github\n> account ? Please see: src/tools/ci/README\n>\n\nI didn't know it. I am sorry for sending garbage to the mailing list.\n\nThank you for information\n\nPavel\n\n\n\n>\n> --\n> Justin\n>\n\nčt 7. 7. 2022 v 8:43 odesílatel Justin Pryzby <pryzby@telsasoft.com> napsal:On Wed, Jul 06, 2022 at 10:30:31PM +0200, Pavel Stehule wrote:\n> This set of patches should to help me with investigation of regress test\n> fail reported by cfbot\n\nDo you know you can do the same as what cfbot does under your own github\naccount ?  Please see: src/tools/ci/READMEI didn't know it. 
I am sorry for sending garbage to the mailing list.Thank you for informationPavel \n\n-- \nJustin", "msg_date": "Thu, 7 Jul 2022 08:52:17 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nI hope I fixed a broken test on msvc\n\nRegards\n\nPavel", "msg_date": "Thu, 7 Jul 2022 14:30:10 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nfresh rebase\n\nRegards\n\nPavel", "msg_date": "Sat, 9 Jul 2022 20:57:00 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nnew update of session variable's implementation\n\n- fresh rebase\n- new possibility to trace execution with DEBUG1 notification\n- new SRF function pg_debug_show_used_session_variables that returns\ncontent of sessionvars hashtab\n- redesign of work with list of variables for reset, recheck, on commit\ndrop, on commit reset\n\nRegards\n\nPavel", "msg_date": "Thu, 21 Jul 2022 08:16:19 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On 7/21/22 08:16, Pavel Stehule wrote:\n> Hi\n> \n> new update of session variable's implementation\n> \n> - fresh rebase\n> - new possibility to trace execution with DEBUG1 notification\n> - new SRF function pg_debug_show_used_session_variables that returns \n> content of sessionvars hashtab\n> - redesign of work with list of variables for reset, recheck, on commit \n> drop, on commit reset\n\nHi Pavel,\n\nI don't know exactly what failed but the docs (html/pdf) don't build:\n\n\ncd ~/pg_stuff/pg_sandbox/pgsql.schema_variables/doc/src/sgml\n\n$ make
html\n/usr/bin/xmllint --path . --noout --valid postgres.sgml\npostgres.sgml:374: element link: validity error : IDREF attribute \nlinkend references an unknown ID \"catalog-pg-variable\"\nmake: *** [Makefile:135: html-stamp] Error 4\n\n\n\nErik Rijkers\n\n\n> \n> Regards\n> \n> Pavel\n> \n\n\n", "msg_date": "Thu, 21 Jul 2022 09:09:47 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi,\n\nOn Thu, Jul 21, 2022 at 09:09:47AM +0200, Erik Rijkers wrote:\n> On 7/21/22 08:16, Pavel Stehule wrote:\n> > Hi\n> > \n> > new update of session variable;s implementation\n> > \n> > - fresh rebase\n> > - new possibility to trace execution with DEBUG1 notification\n> > - new SRF function pg_debug_show_used_session_variables that returns\n> > content of sessionvars hashtab\n> > - redesign of work with list of variables for reset, recheck, on commit\n> > drop, on commit reset\n\nThanks for working on those! I will keep reviewing the patchset.\n\n> I don't know exactly what failed but the docs (html/pdf) don't build:\n> \n> cd ~/pg_stuff/pg_sandbox/pgsql.schema_variables/doc/src/sgml\n> \n> $ make html\n> /usr/bin/xmllint --path . --noout --valid postgres.sgml\n> postgres.sgml:374: element link: validity error : IDREF attribute linkend\n> references an unknown ID \"catalog-pg-variable\"\n> make: *** [Makefile:135: html-stamp] Error 4\n\nApparently most of the changes in catalogs.sgml didn't survive the last rebase.\nI do see the needed section in v20220709-0012-documentation.patch:\n\n> + <sect1 id=\"catalog-pg-variable\">\n> + <title><structname>pg_variable</structname></title>\n> [...]\n\n\n", "msg_date": "Thu, 21 Jul 2022 15:34:05 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "čt 21. 7. 
2022 v 9:34 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> Hi,\n>\n> On Thu, Jul 21, 2022 at 09:09:47AM +0200, Erik Rijkers wrote:\n> > On 7/21/22 08:16, Pavel Stehule wrote:\n> > > Hi\n> > >\n> > > new update of session variable;s implementation\n> > >\n> > > - fresh rebase\n> > > - new possibility to trace execution with DEBUG1 notification\n> > > - new SRF function pg_debug_show_used_session_variables that returns\n> > > content of sessionvars hashtab\n> > > - redesign of work with list of variables for reset, recheck, on commit\n> > > drop, on commit reset\n>\n> Thanks for working on those! I will keep reviewing the patchset.\n>\n> > I don't know exactly what failed but the docs (html/pdf) don't build:\n> >\n> > cd ~/pg_stuff/pg_sandbox/pgsql.schema_variables/doc/src/sgml\n> >\n> > $ make html\n> > /usr/bin/xmllint --path . --noout --valid postgres.sgml\n> > postgres.sgml:374: element link: validity error : IDREF attribute linkend\n> > references an unknown ID \"catalog-pg-variable\"\n> > make: *** [Makefile:135: html-stamp] Error 4\n>\n> Apparently most of the changes in catalogs.sgml didn't survive the last\n> rebase.\n> I do see the needed section in v20220709-0012-documentation.patch:\n>\n> > + <sect1 id=\"catalog-pg-variable\">\n> > + <title><structname>pg_variable</structname></title>\n> > [...]\n>\n\nshould be fixed now\n\nThank you for check\n\nRegards\n\nPavel", "msg_date": "Fri, 22 Jul 2022 10:58:25 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi,\n\nOn Fri, Jul 22, 2022 at 10:58:25AM +0200, Pavel Stehule wrote:\n> > Apparently most of the changes in catalogs.sgml didn't survive the last\n> > rebase.\n> > I do see the needed section in v20220709-0012-documentation.patch:\n> >\n> > > + <sect1 id=\"catalog-pg-variable\">\n> > > + <title><structname>pg_variable</structname></title>\n> > > [...]\n> >\n>\n> 
should be fixed now\n\nThanks! I confirm that the documentation compiles now.\n\nAs mentioned off-list, I still think that the main comment in sessionvariable.c\nneeds to be adapted to the new approach. At the very least it still refers to\nthe previous 2 lists, but as far as I can see there are now 4 lists:\n\n+ /* Both lists hold fields of SVariableXActActionItem type */\n+ static List *xact_on_commit_drop_actions = NIL;\n+ static List *xact_on_commit_reset_actions = NIL;\n+\n+ /*\n+ * the ON COMMIT DROP and ON TRANSACTION END RESET variables\n+ * are purged from memory every time.\n+ */\n+ static List *xact_reset_varids = NIL;\n+\n+ /*\n+ * Holds list variable's id that that should be\n+ * checked against system catalog if still live.\n+ */\n+ static List *xact_recheck_varids = NIL;\n\nApart from that, I'm not sure how much of the previous behavior changed.\n\nIt would be easier to review the new patchset having some up to date general\ndescription of the approach. If that's overall the same, just implemented\nslightly differently I will just go ahead and dig into the patchset (although\nthe comments will still have to be changed eventually).\n\nAlso, one of the things that changes since the last version is:\n\n@@ -1980,15 +1975,13 @@ AtEOSubXact_SessionVariable_on_xact_actions(bool isCommit, SubTransactionId mySu\n */\n foreach(cur_item, xact_on_commit_reset_actions)\n {\n SVariableXActActionItem *xact_ai =\n (SVariableXActActionItem *) lfirst(cur_item);\n\n- if (!isCommit &&\n- xact_ai->creating_subid == mySubid &&\n- xact_ai->action == SVAR_ON_COMMIT_DROP)\n+ if (!isCommit && xact_ai->creating_subid == mySubid)\n\nWe previously discussed this off-line, but for some quick context the test was\nbuggy as it wasn't possible to have an SVAR_ON_COMMIT_DROP action in the\nxact_on_commit_reset_actions list. 
However I don't see any change in the\nregression tests since the last version and the tests are all green in both\nversions.\n\nIt means that was fixed but there's no test covering it. The local memory\nmanagement is probably the hardest part of this patchset, so I'm a bit worried\nif there's nothing that can catch a bug leading to leaked values or entries in\nsome processing list. Do you think it's possible to add some test that would\nhave caught the previous bug?\n\n\n", "msg_date": "Sun, 24 Jul 2022 19:12:52 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On 7/22/22 10:58, Pavel Stehule wrote:\n> \n> čt 21. 7. 2022 v 9:34 odesílatel Julien Rouhaud <rjuju123@gmail.com \n> <mailto:rjuju123@gmail.com>> napsal:\n> \n > [v20220722] patches\n\nHi Pavel,\n\nThanks, docs now build.\n\nAttached a few small text-changes.\n\nAlso, the pg_restore-doc still has the old name 'schema_variable' \ninstead of session_variable:\n\n-A schema_variable\n--variable=schema_variable\n\nSurely those should be changed as well.\n\nErik Rijkers", "msg_date": "Sun, 24 Jul 2022 15:39:32 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi Erik,\n\nOn Sun, Jul 24, 2022 at 03:39:32PM +0200, Erik Rijkers wrote:\n> Attached a few small text-changes.\n\nWhen you send patches like this, could you rename them to something other than\n*.patch and *.diff ?\n\nOtherwise, cfbot tries to apply *only* your patches to master, which fails due\nto missing the original patches that your changes are on top of, and makes it\nlook like the author's patch needs to be rebased.\nhttp://cfbot.cputube.org/pavel-stehule.html - Apply patches: FAILED\n\nAlternately, (especially if your patch fixes a bug), you can resend the\nauthor's patches, rebased, as [1.patch, ..., 
N.patch] plus your changes as\nN+1.patch. Then, cfbot tests your patches, and the author can easily review\nand then integrate your changes. (This is especially nice if the patches\ncurrently need to be rebased, and you can make the cfbot pass at the same time\nas sending fixes).\n\nCheers,\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 24 Jul 2022 14:09:59 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nne 24. 7. 2022 v 13:12 odesílatel Julien Rouhaud <rjuju123@gmail.com>\nnapsal:\n\n> Hi,\n>\n> On Fri, Jul 22, 2022 at 10:58:25AM +0200, Pavel Stehule wrote:\n> > > Apparently most of the changes in catalogs.sgml didn't survive the last\n> > > rebase.\n> > > I do see the needed section in v20220709-0012-documentation.patch:\n> > >\n> > > > + <sect1 id=\"catalog-pg-variable\">\n> > > > + <title><structname>pg_variable</structname></title>\n> > > > [...]\n> > >\n> >\n> > should be fixed now\n>\n> Thanks! I confirm that the documentation compiles now.\n>\n> As mentioned off-list, I still think that the main comment in\n> sessionvariable.c\n> needs to be adapted to the new approach. 
At the very least it still\n> refers to\n> the previous 2 lists, but as far as I can see there are now 4 lists:\n>\n> + /* Both lists hold fields of SVariableXActActionItem type */\n> + static List *xact_on_commit_drop_actions = NIL;\n> + static List *xact_on_commit_reset_actions = NIL;\n> +\n> + /*\n> + * the ON COMMIT DROP and ON TRANSACTION END RESET variables\n> + * are purged from memory every time.\n> + */\n> + static List *xact_reset_varids = NIL;\n> +\n> + /*\n> + * Holds list variable's id that that should be\n> + * checked against system catalog if still live.\n> + */\n> + static List *xact_recheck_varids = NIL;\n>\n> Apart from that, I'm not sure how much of the previous behavior changed.\n>\n> It would be easier to review the new patchset having some up to date\n> general\n> description of the approach. If that's overall the same, just implemented\n> slightly differently I will just go ahead and dig into the patchset\n> (although\n> the comments will still have to be changed eventually).\n>\n> Also, one of the things that changes since the last version is:\n>\n> @@ -1980,15 +1975,13 @@ AtEOSubXact_SessionVariable_on_xact_actions(bool\n> isCommit, SubTransactionId mySu\n> */\n> foreach(cur_item, xact_on_commit_reset_actions)\n> {\n> SVariableXActActionItem *xact_ai =\n> (SVariableXActActionItem *)\n> lfirst(cur_item);\n>\n> - if (!isCommit &&\n> - xact_ai->creating_subid == mySubid &&\n> - xact_ai->action == SVAR_ON_COMMIT_DROP)\n> + if (!isCommit && xact_ai->creating_subid == mySubid)\n>\n> We previously discussed this off-line, but for some quick context the test\n> was\n> buggy as it wasn't possible to have an SVAR_ON_COMMIT_DROP action in the\n> xact_on_commit_reset_actions list. However I don't see any change in the\n> regression tests since the last version and the tests are all green in both\n> versions.\n>\n> It means that was fixed but there's no test covering it. 
The local memory\n> management is probably the hardest part of this patchset, so I'm a bit\n> worried\n> if there's nothing that can catch a bug leading to leaked values or\n> entries in\n> some processing list. Do you think it's possible to add some test that\n> would\n> have caught the previous bug?\n>\n\nI am sending an updated patch. I had to modify sinval message handling.\nThe previous implementation was not robust and correct (there was some\npossibility that the value stored in a session's variable was lost after\nan aborted DROP VARIABLE). There are new regress tests requested by Julien and some\nothers describing the mentioned issue. I rewrote the implementation's\ndescription part in sessionvariable.c.\n\nErik's patches are merged. Thank you for them.\n\nRegards\n\nPavel", "msg_date": "Wed, 27 Jul 2022 21:59:18 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi,\n\nOn Wed, Jul 27, 2022 at 09:59:18PM +0200, Pavel Stehule wrote:\n> \n> ne 24. 7. 2022 v 13:12 odesílatel Julien Rouhaud <rjuju123@gmail.com>\n> napsal:\n> \n> > Hi,\n> >\n> > On Fri, Jul 22, 2022 at 10:58:25AM +0200, Pavel Stehule wrote:\n> > > > Apparently most of the changes in catalogs.sgml didn't survive the last\n> > > > rebase.\n> > > > I do see the needed section in v20220709-0012-documentation.patch:\n> > > >\n> > > > > + <sect1 id=\"catalog-pg-variable\">\n> > > > > + <title><structname>pg_variable</structname></title>\n> > > > > [...]\n> > > >\n> > >\n> > > should be fixed now\n> >\n> > Thanks! I confirm that the documentation compiles now.\n> >\n> > As mentioned off-list, I still think that the main comment in\n> > sessionvariable.c\n> > needs to be adapted to the new approach. 
At the very least it still\n> > refers to\n> > the previous 2 lists, but as far as I can see there are now 4 lists:\n> >\n> > + /* Both lists hold fields of SVariableXActActionItem type */\n> > + static List *xact_on_commit_drop_actions = NIL;\n> > + static List *xact_on_commit_reset_actions = NIL;\n> > +\n> > + /*\n> > + * the ON COMMIT DROP and ON TRANSACTION END RESET variables\n> > + * are purged from memory every time.\n> > + */\n> > + static List *xact_reset_varids = NIL;\n> > +\n> > + /*\n> > + * Holds list variable's id that that should be\n> > + * checked against system catalog if still live.\n> > + */\n> > + static List *xact_recheck_varids = NIL;\n> >\n> > Apart from that, I'm not sure how much of the previous behavior changed.\n> >\n> > It would be easier to review the new patchset having some up to date\n> > general\n> > description of the approach. If that's overall the same, just implemented\n> > slightly differently I will just go ahead and dig into the patchset\n> > (although\n> > the comments will still have to be changed eventually).\n> >\n> > Also, one of the things that changes since the last version is:\n> >\n> > @@ -1980,15 +1975,13 @@ AtEOSubXact_SessionVariable_on_xact_actions(bool\n> > isCommit, SubTransactionId mySu\n> > */\n> > foreach(cur_item, xact_on_commit_reset_actions)\n> > {\n> > SVariableXActActionItem *xact_ai =\n> > (SVariableXActActionItem *)\n> > lfirst(cur_item);\n> >\n> > - if (!isCommit &&\n> > - xact_ai->creating_subid == mySubid &&\n> > - xact_ai->action == SVAR_ON_COMMIT_DROP)\n> > + if (!isCommit && xact_ai->creating_subid == mySubid)\n> >\n> > We previously discussed this off-line, but for some quick context the test\n> > was\n> > buggy as it wasn't possible to have an SVAR_ON_COMMIT_DROP action in the\n> > xact_on_commit_reset_actions list. 
However I don't see any change in the\n> > regression tests since the last version and the tests are all green in both\n> > versions.\n> >\n> > It means that was fixed but there's no test covering it. The local memory\n> > management is probably the hardest part of this patchset, so I'm a bit\n> > worried\n> > if there's nothing that can catch a bug leading to leaked values or\n> > entries in\n> > some processing list. Do you think it's possible to add some test that\n> > would\n> > have caught the previous bug?\n> >\n> \n> I am sending an updated patch. I had to modify sinval message handling.\n> Previous implementation was not robust and correct (there was some\n> possibility, so value stored in session's variable was lost after aborted\n> drop variable. There are new regress tests requested by Julien and some\n> others describing the mentioned issue. I rewrote the implementation's\n> description part in sessionvariable.c.\n\nThanks a lot, that's very helpful!\n\nI looked at the new description and I'm not sure that I understand the need for\nthe \"format change\" code that tries to detect whether the underlying types was\nmodified. It seems quite fragile, wouldn't it be better to have the same\nbehavior as for relation (detect and prevent such changes in the first place),\nsince both cases share the same requirements about underlying data types? For\ninstance, it should be totally acceptable to drop an attribute from a custom\ndata type if a session variable is using it, same as if a table is using it but\nas is it would be rejected for session variables.\n\nWhile at it, the new comments contain a lot of non breakable spaces rather than\nnormal spaces. 
I also just realized that there's a sessionvariable.c while the\nheader is named session_variable.h.\n\n\n", "msg_date": "Mon, 1 Aug 2022 12:53:58 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "po 1. 8. 2022 v 6:54 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> Hi,\n>\n> On Wed, Jul 27, 2022 at 09:59:18PM +0200, Pavel Stehule wrote:\n> >\n> > ne 24. 7. 2022 v 13:12 odesílatel Julien Rouhaud <rjuju123@gmail.com>\n> > napsal:\n> >\n> > > Hi,\n> > >\n> > > On Fri, Jul 22, 2022 at 10:58:25AM +0200, Pavel Stehule wrote:\n> > > > > Apparently most of the changes in catalogs.sgml didn't survive the\n> last\n> > > > > rebase.\n> > > > > I do see the needed section in v20220709-0012-documentation.patch:\n> > > > >\n> > > > > > + <sect1 id=\"catalog-pg-variable\">\n> > > > > > + <title><structname>pg_variable</structname></title>\n> > > > > > [...]\n> > > > >\n> > > >\n> > > > should be fixed now\n> > >\n> > > Thanks! I confirm that the documentation compiles now.\n> > >\n> > > As mentioned off-list, I still think that the main comment in\n> > > sessionvariable.c\n> > > needs to be adapted to the new approach. 
At the very least it still\n> > > refers to\n> > > the previous 2 lists, but as far as I can see there are now 4 lists:\n> > >\n> > > + /* Both lists hold fields of SVariableXActActionItem type */\n> > > + static List *xact_on_commit_drop_actions = NIL;\n> > > + static List *xact_on_commit_reset_actions = NIL;\n> > > +\n> > > + /*\n> > > + * the ON COMMIT DROP and ON TRANSACTION END RESET variables\n> > > + * are purged from memory every time.\n> > > + */\n> > > + static List *xact_reset_varids = NIL;\n> > > +\n> > > + /*\n> > > + * Holds list variable's id that that should be\n> > > + * checked against system catalog if still live.\n> > > + */\n> > > + static List *xact_recheck_varids = NIL;\n> > >\n> > > Apart from that, I'm not sure how much of the previous behavior\n> changed.\n> > >\n> > > It would be easier to review the new patchset having some up to date\n> > > general\n> > > description of the approach. If that's overall the same, just\n> implemented\n> > > slightly differently I will just go ahead and dig into the patchset\n> > > (although\n> > > the comments will still have to be changed eventually).\n> > >\n> > > Also, one of the things that changes since the last version is:\n> > >\n> > > @@ -1980,15 +1975,13 @@\n> AtEOSubXact_SessionVariable_on_xact_actions(bool\n> > > isCommit, SubTransactionId mySu\n> > > */\n> > > foreach(cur_item, xact_on_commit_reset_actions)\n> > > {\n> > > SVariableXActActionItem *xact_ai =\n> > > (SVariableXActActionItem *)\n> > > lfirst(cur_item);\n> > >\n> > > - if (!isCommit &&\n> > > - xact_ai->creating_subid == mySubid &&\n> > > - xact_ai->action == SVAR_ON_COMMIT_DROP)\n> > > + if (!isCommit && xact_ai->creating_subid == mySubid)\n> > >\n> > > We previously discussed this off-line, but for some quick context the\n> test\n> > > was\n> > > buggy as it wasn't possible to have an SVAR_ON_COMMIT_DROP action in\n> the\n> > > xact_on_commit_reset_actions list. 
However I don't see any change in\n> the\n> > > regression tests since the last version and the tests are all green in\n> both\n> > > versions.\n> > >\n> > > It means that was fixed but there's no test covering it. The local\n> memory\n> > > management is probably the hardest part of this patchset, so I'm a bit\n> > > worried\n> > > if there's nothing that can catch a bug leading to leaked values or\n> > > entries in\n> > > some processing list. Do you think it's possible to add some test that\n> > > would\n> > > have caught the previous bug?\n> > >\n> >\n> > I am sending an updated patch. I had to modify sinval message handling.\n> > Previous implementation was not robust and correct (there was some\n> > possibility, so value stored in session's variable was lost after aborted\n> > drop variable. There are new regress tests requested by Julien and some\n> > others describing the mentioned issue. I rewrote the implementation's\n> > description part in sessionvariable.c.\n>\n> Thanks a lot, that's very helpful!\n>\n> I looked at the new description and I'm not sure that I understand the\n> need for\n> the \"format change\" code that tries to detect whether the underlying types\n> was\n> modified. It seems quite fragile, wouldn't it be better to have the same\n> behavior as for relation (detect and prevent such changes in the first\n> place),\n> since both cases share the same requirements about underlying data types?\n> For\n> instance, it should be totally acceptable to drop an attribute from a\n> custom\n> data type if a session variable is using it, same as if a table is using\n> it but\n> as is it would be rejected for session variables.\n>\n\nThis is the first implementation and my strategy is \"to be safe and to be\nstrict\". I did tests I know, so the test of compatibility of composite\ntypes can be more tolerant. But I use this test to test my identity against\noid overflow, and I don't feel comfortable if I write this test too\ntolerantly. 
For implementation of a more precise test I need to save a\nsignature of attributes. So the test should not be done just on\ncompatibility of types from TupleDesc; it should also check attribute\nOIDs. I had an idea to implement it in the next stage, and for this stage\njust to require compatibility of the vector of types.\n\nCan this enhanced check be implemented later, or do you think it should\nbe implemented now? I'll check how much new code it needs.\n\n\n>\n> While at it, the new comments contain a lot of non breakable spaces rather\n> than\n> normal spaces. I also just realized that there's a sessionvariable.c\n> while the\n> header is named session_variable.h.\n>\n\nMy bad - I used gmail as a spellchecker, and it wrote some white spaces\nthere :-/\n\nshould be fixed now", "msg_date": "Mon, 1 Aug 2022 08:24:14 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nI am sending fresh update\n\n- enhanced work with composite types - now the used composite type can be\nenhanced, reduced and stored value is converted to expected format\n- enhancing find_composite_type_dependencies to support session variables,\nso the type of any field of used composite type cannot be changed\n\nRegards\n\nPavel", "msg_date": "Fri, 19 Aug 2022 15:57:29 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "pá 19. 8. 
2022 v 15:57 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> I am sending fresh update\n>\n> - enhanced work with composite types - now the used composite type can be\n> enhanced, reduced and stored value is converted to expected format\n> - enhancing find_composite_type_dependencies to support session variables,\n> so the type of any field of used composite type cannot be changed\n>\n\nupdate - fix cpp check\n\n\n\n> Regards\n>\n> Pavel\n>", "msg_date": "Fri, 19 Aug 2022 17:29:45 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "> diff --git a/src/backend/parser/parse_relation.c b/src/backend/parser/parse_relation.c\n> index f6b740df0a..b3bee39457 100644\n> --- a/src/backend/parser/parse_relation.c\n> +++ b/src/backend/parser/parse_relation.c\n> @@ -3667,8 +3667,8 @@ errorMissingColumn(ParseState *pstate,\n> \t\tereport(ERROR,\n> \t\t\t\t(errcode(ERRCODE_UNDEFINED_COLUMN),\n> \t\t\t\t relname ?\n> -\t\t\t\t errmsg(\"column %s.%s does not exist\", relname, colname) :\n> -\t\t\t\t errmsg(\"column \\\"%s\\\" does not exist\", colname),\n> +\t\t\t\t errmsg(\"column or variable %s.%s does not exist\", relname, colname) :\n> +\t\t\t\t errmsg(\"column or variable \\\"%s\\\" does not exist\", colname),\n> \t\t\t\t state->rfirst ? closestfirst ?\n> \t\t\t\t errhint(\"Perhaps you meant to reference the column \\\"%s.%s\\\".\",\n> \t\t\t\t\t\t state->rfirst->eref->aliasname, closestfirst) :\n\nThis is in your patch 12. I wonder -- if relname is not null, then\nsurely this is a column and not a variable, right? 
So only the second\nerrmsg() here should be changed, and the first one should remain as in\nthe original.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 19 Aug 2022 22:53:52 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Op 19-08-2022 om 17:29 schreef Pavel Stehule:\n> pá 19. 8. 2022 v 15:57 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> napsal:\n> \n>> Hi\n>>\n>> I am sending fresh update\n>>\n>> - enhanced work with composite types - now the used composite type can be\n>> enhanced, reduced and stored value is converted to expected format\n>> - enhancing find_composite_type_dependencies to support session variables,\n>> so the type of any field of used composite type cannot be changed\n>>\n> \n> update - fix cpp check\n\nv20220819-2-0001-Catalogue-support-for-session-variables.patch\nv20220819-2-0002-session-variables.patch\nv20220819-2-0003-typecheck-check-of-consistency-of-format-of-stored-v.patch\nv20220819-2-0004-LET-command.patch\nv20220819-2-0005-Support-of-LET-command-in-PLpgSQL.patch\nv20220819-2-0006-DISCARD-VARIABLES-command.patch\nv20220819-2-0007-Enhancing-psql-for-session-variables.patch\nv20220819-2-0008-Possibility-to-dump-session-variables-by-pg_dump.patch\nv20220819-2-0009-typedefs.patch\nv20220819-2-0010-Regress-tests-for-session-variables.patch\nv20220819-2-0011-fix.patch\nv20220819-2-0012-This-patch-changes-error-message-column-doesn-t-exis.patch\nv20220819-2-0013-documentation.patch\n\nmake check fails as a result of the errors in the attached \nsession_variables.out.\n\n\nErik\n\n> \n>> Regards\n>>\n>> Pavel\n>>\n>", "msg_date": "Sat, 20 Aug 2022 15:32:20 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Op 20-08-2022 om 15:32 
schreef Erik Rijkers:\n> Op 19-08-2022 om 17:29 schreef Pavel Stehule:\n> \n> make check  fails as a result of the errors in the attached \n> session_variables.out.\n> \n\n\nSorry, that should have been this diffs file, of course (attached).\n\n\nErik", "msg_date": "Sat, 20 Aug 2022 15:36:42 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "so 20. 8. 2022 v 15:36 odesílatel Erik Rijkers <er@xs4all.nl> napsal:\n\n> Op 20-08-2022 om 15:32 schreef Erik Rijkers:\n> > Op 19-08-2022 om 17:29 schreef Pavel Stehule:\n> >\n> > make check fails as a result of the errors in the attached\n> > session_variables.out.\n> >\n>\n>\n> Sorry, that should have been this diffs file, of course (attached).\n>\n\nIt looks like some problem with not well initialized memory, but I have no\nidea how it is possible. What are your configure options?\n\n\n\n>\n> Erik\n", "msg_date": "Sat, 20 Aug 2022 15:41:14 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "\n\nOp 20-08-2022 om 15:41 schreef Pavel Stehule:\n> so 20. 8. 
2022 v 15:36 odesílatel Erik Rijkers <er@xs4all.nl> napsal:\n> \n>> Op 20-08-2022 om 15:32 schreef Erik Rijkers:\n>>> Op 19-08-2022 om 17:29 schreef Pavel Stehule:\n>>>\n>>> make check fails as a result of the errors in the attached\n>>> session_variables.out.\n>>>\n>>\n>>\n>> Sorry, that should have been this diffs file, of course (attached).\n>>\n> \n> It looks like some problem with not well initialized memory, but I have no\n> idea how it is possible. What are your configure options?\n> \n\nI compiled both assert-enable and 'normal', and I only just noticed that \nthe assert-enable one did pass tests normally.\n\n\nBelow is the config that produced the failing tests:\n\n./configure \n--prefix=/home/aardvark/pg_stuff/pg_installations/pgsql.schema_variables \n--bindir=/home/aardvark/pg_stuff/pg_installations/pgsql.schema_variables/bin.fast \n--libdir=/home/aardvark/pg_stuff/pg_installations/pgsql.schema_variables/lib.fast \n--with-pgport=6986 --quiet --enable-depend --with-openssl --with-perl \n--with-libxml --with-libxslt --with-zlib --enable-tap-tests \n--with-extra-version=_0820_schema_variables_1509 --with-lz4 --with-icu\n\n\n(debian 9, gcc 12.2.0)\n\n> \n>>\n>> Erik\n> \n\n\n", "msg_date": "Sat, 20 Aug 2022 15:55:07 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On Sat, Aug 20, 2022 at 03:55:07PM +0200, Erik Rijkers wrote:\n>\n> Op 20-08-2022 om 15:41 schreef Pavel Stehule:\n> > so 20. 8. 
2022 v 15:36 odesílatel Erik Rijkers <er@xs4all.nl> napsal:\n> >\n> > > Op 20-08-2022 om 15:32 schreef Erik Rijkers:\n> > > > Op 19-08-2022 om 17:29 schreef Pavel Stehule:\n> > > >\n> > > > make check fails as a result of the errors in the attached\n> > > > session_variables.out.\n> > > >\n> > >\n> > >\n> > > Sorry, that should have been this diffs file, of course (attached).\n> > >\n> >\n> > It looks like some problem with not well initialized memory, but I have no\n> > idea how it is possible. What are your configure options?\n> >\n>\n> I compiled both assert-enable and 'normal', and I only just noticed that the\n> assert-enable one did pass tests normally.\n>\n>\n> Below is the config that produced the failing tests:\n>\n> ./configure\n> --prefix=/home/aardvark/pg_stuff/pg_installations/pgsql.schema_variables --bindir=/home/aardvark/pg_stuff/pg_installations/pgsql.schema_variables/bin.fast --libdir=/home/aardvark/pg_stuff/pg_installations/pgsql.schema_variables/lib.fast\n> --with-pgport=6986 --quiet --enable-depend --with-openssl --with-perl\n> --with-libxml --with-libxslt --with-zlib --enable-tap-tests\n> --with-extra-version=_0820_schema_variables_1509 --with-lz4 --with-icu\n\nI also tried locally (didn't look at the patch yet), with debug/assert enabled,\nand had similar error:\n\ndiff -dU10 /Users/rjuju/git/postgresql/src/test/regress/expected/session_variables.out /Users/rjuju/git/pg/pgmaster_debug/src/test/regress/results/session_variables.out\n--- /Users/rjuju/git/postgresql/src/test/regress/expected/session_variables.out	2022-08-20 22:25:17.000000000 +0800\n+++ /Users/rjuju/git/pg/pgmaster_debug/src/test/regress/results/session_variables.out	2022-08-20 22:30:50.000000000 +0800\n@@ -983,23 +983,23 @@\n -- should to fail\n SELECT public.svar;\n svar\n ---------\n (10,20)\n (1 row)\n\n ALTER TYPE public.svar_test_type ADD ATTRIBUTE c int;\n -- should to fail too (different type, different generation number);\n SELECT public.svar;\n- svar\n-----------\n- 
(10,20,)\n+ svar\n+--------------------\n+ (10,20,2139062142)\n (1 row)\n\n LET public.svar = ROW(10,20,30);\n -- should be ok again for new value\n SELECT public.svar;\n svar\n ------------\n (10,20,30)\n (1 row)\n\n@@ -1104,31 +1104,31 @@\n (1 row)\n\n DROP VARIABLE public.svar;\n DROP TYPE public.svar_test_type;\n CREATE TYPE public.svar_test_type AS (a int, b int);\n CREATE VARIABLE public.svar AS public.svar_test_type;\n CREATE VARIABLE public.svar2 AS public.svar_test_type;\n LET public.svar = (10, 20);\n ALTER TYPE public.svar_test_type ADD ATTRIBUTE c int;\n SELECT public.svar;\n- svar\n-----------\n- (10,20,)\n+ svar\n+------------\n+ (10,20,16)\n (1 row)\n\n LET public.svar2 = (10, 20, 30);\n ALTER TYPE public.svar_test_type DROP ATTRIBUTE b;\n SELECT public.svar;\n- svar\n--------\n- (10,)\n+ svar\n+---------\n+ (10,16)\n (1 row)\n\n SELECT public.svar2;\n svar2\n ---------\n (10,30)\n (1 row)\n\n\n", "msg_date": "Sat, 20 Aug 2022 22:35:05 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nso 20. 8. 2022 v 16:35 odesílatel Julien Rouhaud <rjuju123@gmail.com>\nnapsal:\n\n> On Sat, Aug 20, 2022 at 03:55:07PM +0200, Erik Rijkers wrote:\n> >\n> > Op 20-08-2022 om 15:41 schreef Pavel Stehule:\n> > > so 20. 8. 2022 v 15:36 odesílatel Erik Rijkers <er@xs4all.nl> napsal:\n> > >\n> > > > Op 20-08-2022 om 15:32 schreef Erik Rijkers:\n> > > > > Op 19-08-2022 om 17:29 schreef Pavel Stehule:\n> > > > >\n> > > > > make check fails as a result of the errors in the attached\n> > > > > session_variables.out.\n> > > > >\n> > > >\n> > > >\n> > > > Sorry, that should have been this diffs file, of course (attached).\n> > > >\n> > >\n> > > It looks like some problem with not well initialized memory, but I\n> have no\n> > > idea how it is possible. 
What are your configure options?\n> > >\n> >\n> > I compiled both assert-enable and 'normal', and I only just noticed that\n> the\n> > assert-enable one did pass tests normally.\n> >\n> >\n> > Below is the config that produced the failing tests:\n> >\n> > ./configure\n> > --prefix=/home/aardvark/pg_stuff/pg_installations/pgsql.schema_variables\n> --bindir=/home/aardvark/pg_stuff/pg_installations/pgsql.schema_variables/bin.fast\n> --libdir=/home/aardvark/pg_stuff/pg_installations/pgsql.schema_variables/lib.fast\n> > --with-pgport=6986 --quiet --enable-depend --with-openssl --with-perl\n> > --with-libxml --with-libxslt --with-zlib --enable-tap-tests\n> > --with-extra-version=_0820_schema_variables_1509 --with-lz4\n> --with-icu\n>\n> I also tried locally (didn't look at the patch yet), with debug/assert\n> enabled,\n> and had similar error:\n>\n> diff -dU10\n> /Users/rjuju/git/postgresql/src/test/regress/expected/session_variables.out\n> /Users/rjuju/git/pg/pgmaster_debug/src/test/regress/results/session_variables.out\n> ---\n> /Users/rjuju/git/postgresql/src/test/regress/expected/session_variables.out\n> 2022-08-20 22:25:17.000000000 +0800\n> +++\n> /Users/rjuju/git/pg/pgmaster_debug/src/test/regress/results/session_variables.out\n> 2022-08-20 22:30:50.000000000 +0800\n> @@ -983,23 +983,23 @@\n> -- should to fail\n> SELECT public.svar;\n> svar\n> ---------\n> (10,20)\n> (1 row)\n>\n> ALTER TYPE public.svar_test_type ADD ATTRIBUTE c int;\n> -- should to fail too (different type, different generation number);\n> SELECT public.svar;\n> - svar\n> -----------\n> - (10,20,)\n> + svar\n> +--------------------\n> + (10,20,2139062142)\n> (1 row)\n>\n> LET public.svar = ROW(10,20,30);\n> -- should be ok again for new value\n> SELECT public.svar;\n> svar\n> ------------\n> (10,20,30)\n> (1 row)\n>\n> @@ -1104,31 +1104,31 @@\n> (1 row)\n>\n> DROP VARIABLE public.svar;\n> DROP TYPE public.svar_test_type;\n> CREATE TYPE public.svar_test_type AS (a int, b int);\n> CREATE VARIABLE 
public.svar AS public.svar_test_type;\n> CREATE VARIABLE public.svar2 AS public.svar_test_type;\n> LET public.svar = (10, 20);\n> ALTER TYPE public.svar_test_type ADD ATTRIBUTE c int;\n> SELECT public.svar;\n> - svar\n> -----------\n> - (10,20,)\n> + svar\n> +------------\n> + (10,20,16)\n> (1 row)\n>\n> LET public.svar2 = (10, 20, 30);\n> ALTER TYPE public.svar_test_type DROP ATTRIBUTE b;\n> SELECT public.svar;\n> - svar\n> --------\n> - (10,)\n> + svar\n> +---------\n> + (10,16)\n> (1 row)\n>\n> SELECT public.svar2;\n> svar2\n> ---------\n> (10,30)\n> (1 row)\n>\n\nI hope so I found this error. It should be fixed\n\nRegards\n\nPavel", "msg_date": "Sat, 20 Aug 2022 20:09:20 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "pá 19. 8. 2022 v 22:54 odesílatel Alvaro Herrera <alvherre@alvh.no-ip.org>\nnapsal:\n\n> > diff --git a/src/backend/parser/parse_relation.c\n> b/src/backend/parser/parse_relation.c\n> > index f6b740df0a..b3bee39457 100644\n> > --- a/src/backend/parser/parse_relation.c\n> > +++ b/src/backend/parser/parse_relation.c\n> > @@ -3667,8 +3667,8 @@ errorMissingColumn(ParseState *pstate,\n> > ereport(ERROR,\n> > (errcode(ERRCODE_UNDEFINED_COLUMN),\n> > relname ?\n> > - errmsg(\"column %s.%s does not exist\",\n> relname, colname) :\n> > - errmsg(\"column \\\"%s\\\" does not exist\",\n> colname),\n> > + errmsg(\"column or variable %s.%s does not\n> exist\", relname, colname) :\n> > + errmsg(\"column or variable \\\"%s\\\" does\n> not exist\", colname),\n> > state->rfirst ? closestfirst ?\n> > errhint(\"Perhaps you meant to reference\n> the column \\\"%s.%s\\\".\",\n> >\n> state->rfirst->eref->aliasname, closestfirst) :\n>\n> This is in your patch 12. I wonder -- if relname is not null, then\n> surely this is a column and not a variable, right? 
So only the second\n> errmsg() here should be changed, and the first one should remain as in\n> the original.\n>\n\nYes, it should work. I changed it in today's patch\n\nThank you for the tip\n\nRegards\n\nPavel\n\n\n>\n> --\n> Álvaro Herrera PostgreSQL Developer —\n> https://www.EnterpriseDB.com/\n>", "msg_date": "Sat, 20 Aug 2022 20:10:39 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Op 20-08-2022 om 20:09 schreef Pavel Stehule:\n> Hi\n> \n>> LET public.svar2 = (10, 20, 30);\n>> ALTER TYPE public.svar_test_type DROP ATTRIBUTE b;\n>> SELECT public.svar;\n>> - svar\n>> --------\n>> - (10,)\n>> + svar\n>> +---------\n>> + (10,16)\n>> (1 row)\n>>\n>> SELECT public.svar2;\n>> svar2\n>> ---------\n>> (10,30)\n>> (1 row)\n>>\n> \n> I hope so I found this error. It should be fixed\n> > [patches v20220820-1-0001 -> 0012]\n\n\nI'm afraid it still gives the same errors during 'make check', and \nagain only errors when compiling without --enable-cassert\n\nThanks,\n\nErik\n\n\n> Regards\n> \n> Pavel\n>", "msg_date": "Sat, 20 Aug 2022 20:44:49 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On Sat, Aug 20, 2022 at 08:44:49PM +0200, Erik Rijkers wrote:\n> Op 20-08-2022 om 20:09 schreef Pavel Stehule:\n> > Hi\n> > \n> > > LET public.svar2 = (10, 20, 30);\n> > > ALTER TYPE public.svar_test_type DROP ATTRIBUTE b;\n> > > SELECT public.svar;\n> > > - svar\n> > > --------\n> > > - (10,)\n> > > + svar\n> > > +---------\n> > > + (10,16)\n> > > (1 row)\n> > > \n> > > SELECT public.svar2;\n> > > svar2\n> > > ---------\n> > > (10,30)\n> > > (1 row)\n> > > \n> > \n> > I hope so I found this error. 
It should be fixed\n> > > [patches v20220820-1-0001 -> 0012]\n> \n> \n> I'm afraid it still gives the same errors during 'make check', and again\n> only errors when compiling without --enable-cassert\n\nIt still fails for me for both --enable-cassert and --disable-cassert, with a\ndifferent number of errors though.\n\nThe cfbot is green, but it's unclear to me which version was applied on the\nlast run. AFAICS there's no log available for the branch creation if it\nsucceeds.\n\n--enable-cassert:\n\n LET public.svar = (10, 20);\n ALTER TYPE public.svar_test_type ADD ATTRIBUTE c int;\n SELECT public.svar;\n- svar\n-----------\n- (10,20,)\n+ svar\n+------------\n+ (10,20,16)\n (1 row)\n\n LET public.svar2 = (10, 20, 30);\n ALTER TYPE public.svar_test_type DROP ATTRIBUTE b;\n SELECT public.svar;\n- svar\n--------\n- (10,)\n+ svar\n+---------\n+ (10,16)\n (1 row)\n\n\n\n--disable-cassert:\n\n ALTER TYPE public.svar_test_type ADD ATTRIBUTE c int;\n -- should to fail too (different type, different generation number);\n SELECT public.svar;\n- svar\n-----------\n- (10,20,)\n+ svar\n+------------\n+ (10,20,32)\n (1 row)\n\n LET public.svar = ROW(10,20,30);\n -- should be ok again for new value\n SELECT public.svar;\n svar\n ------------\n (10,20,30)\n (1 row)\n\n@@ -1104,31 +1104,31 @@\n (1 row)\n\n DROP VARIABLE public.svar;\n DROP TYPE public.svar_test_type;\n CREATE TYPE public.svar_test_type AS (a int, b int);\n CREATE VARIABLE public.svar AS public.svar_test_type;\n CREATE VARIABLE public.svar2 AS public.svar_test_type;\n LET public.svar = (10, 20);\n ALTER TYPE public.svar_test_type ADD ATTRIBUTE c int;\n SELECT public.svar;\n- svar\n-----------\n- (10,20,)\n+ svar\n+------------\n+ (10,20,16)\n (1 row)\n\n LET public.svar2 = (10, 20, 30);\n ALTER TYPE public.svar_test_type DROP ATTRIBUTE b;\n SELECT public.svar;\n- svar\n--------\n- (10,)\n+ svar\n+---------\n+ (10,16)\n (1 row)\n\n\n\n", "msg_date": "Sun, 21 Aug 2022 12:36:21 +0800", "msg_from": "Julien Rouhaud 
<rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "ne 21. 8. 2022 v 6:36 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> On Sat, Aug 20, 2022 at 08:44:49PM +0200, Erik Rijkers wrote:\n> > Op 20-08-2022 om 20:09 schreef Pavel Stehule:\n> > > Hi\n> > >\n> > > > LET public.svar2 = (10, 20, 30);\n> > > > ALTER TYPE public.svar_test_type DROP ATTRIBUTE b;\n> > > > SELECT public.svar;\n> > > > - svar\n> > > > --------\n> > > > - (10,)\n> > > > + svar\n> > > > +---------\n> > > > + (10,16)\n> > > > (1 row)\n> > > >\n> > > > SELECT public.svar2;\n> > > > svar2\n> > > > ---------\n> > > > (10,30)\n> > > > (1 row)\n> > > >\n> > >\n> > > I hope so I found this error. It should be fixed\n> > > > [patches v20220820-1-0001 -> 0012]\n> >\n> >\n> > I'm afraid it still gives the same errors during 'make check', and again\n> > only errors when compiling without --enable-cassert\n>\n> It still fails for me for both --enable-cassert and --disable-cassert,\n> with a\n> different number of errors though.\n>\n> The cfbot is green, but it's unclear to me which version was applied on the\n> last run. 
AFAICS there's no log available for the branch creation if it\n> succeeds.\n>\n> --enable-cassert:\n>\n> LET public.svar = (10, 20);\n> ALTER TYPE public.svar_test_type ADD ATTRIBUTE c int;\n> SELECT public.svar;\n> - svar\n> -----------\n> - (10,20,)\n> + svar\n> +------------\n> + (10,20,16)\n> (1 row)\n>\n> LET public.svar2 = (10, 20, 30);\n> ALTER TYPE public.svar_test_type DROP ATTRIBUTE b;\n> SELECT public.svar;\n> - svar\n> --------\n> - (10,)\n> + svar\n> +---------\n> + (10,16)\n> (1 row)\n>\n>\n>\n> --disable-cassert:\n>\n> ALTER TYPE public.svar_test_type ADD ATTRIBUTE c int;\n> -- should to fail too (different type, different generation number);\n> SELECT public.svar;\n> - svar\n> -----------\n> - (10,20,)\n> + svar\n> +------------\n> + (10,20,32)\n> (1 row)\n>\n> LET public.svar = ROW(10,20,30);\n> -- should be ok again for new value\n> SELECT public.svar;\n> svar\n> ------------\n> (10,20,30)\n> (1 row)\n>\n> @@ -1104,31 +1104,31 @@\n> (1 row)\n>\n> DROP VARIABLE public.svar;\n> DROP TYPE public.svar_test_type;\n> CREATE TYPE public.svar_test_type AS (a int, b int);\n> CREATE VARIABLE public.svar AS public.svar_test_type;\n> CREATE VARIABLE public.svar2 AS public.svar_test_type;\n> LET public.svar = (10, 20);\n> ALTER TYPE public.svar_test_type ADD ATTRIBUTE c int;\n> SELECT public.svar;\n> - svar\n> -----------\n> - (10,20,)\n> + svar\n> +------------\n> + (10,20,16)\n> (1 row)\n>\n> LET public.svar2 = (10, 20, 30);\n> ALTER TYPE public.svar_test_type DROP ATTRIBUTE b;\n> SELECT public.svar;\n> - svar\n> --------\n> - (10,)\n> + svar\n> +---------\n> + (10,16)\n> (1 row)\n>\n\nI got the same result, when I did build without assertions, so I can debug\nit now.", "msg_date": "Sun, 21 Aug 2022 07:51:07 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "ne 21. 8. 
2022 v 6:36 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> On Sat, Aug 20, 2022 at 08:44:49PM +0200, Erik Rijkers wrote:\n> > Op 20-08-2022 om 20:09 schreef Pavel Stehule:\n> > > Hi\n> > >\n> > > > LET public.svar2 = (10, 20, 30);\n> > > > ALTER TYPE public.svar_test_type DROP ATTRIBUTE b;\n> > > > SELECT public.svar;\n> > > > - svar\n> > > > --------\n> > > > - (10,)\n> > > > + svar\n> > > > +---------\n> > > > + (10,16)\n> > > > (1 row)\n> > > >\n> > > > SELECT public.svar2;\n> > > > svar2\n> > > > ---------\n> > > > (10,30)\n> > > > (1 row)\n> > > >\n> > >\n> > > I hope so I found this error. It should be fixed\n> > > > [patches v20220820-1-0001 -> 0012]\n> >\n> >\n> > I'm afraid it still gives the same errors during 'make check', and again\n> > only errors when compiling without --enable-cassert\n>\n> It still fails for me for both --enable-cassert and --disable-cassert,\n> with a\n> different number of errors though.\n>\n> The cfbot is green, but it's unclear to me which version was applied on the\n> last run. 
AFAICS there's no log available for the branch creation if it\n> succeeds.\n>\n> --enable-cassert:\n>\n> LET public.svar = (10, 20);\n> ALTER TYPE public.svar_test_type ADD ATTRIBUTE c int;\n> SELECT public.svar;\n> - svar\n> -----------\n> - (10,20,)\n> + svar\n> +------------\n> + (10,20,16)\n> (1 row)\n>\n> LET public.svar2 = (10, 20, 30);\n> ALTER TYPE public.svar_test_type DROP ATTRIBUTE b;\n> SELECT public.svar;\n> - svar\n> --------\n> - (10,)\n> + svar\n> +---------\n> + (10,16)\n> (1 row)\n>\n>\n>\n> --disable-cassert:\n>\n> ALTER TYPE public.svar_test_type ADD ATTRIBUTE c int;\n> -- should to fail too (different type, different generation number);\n> SELECT public.svar;\n> - svar\n> -----------\n> - (10,20,)\n> + svar\n> +------------\n> + (10,20,32)\n> (1 row)\n>\n> LET public.svar = ROW(10,20,30);\n> -- should be ok again for new value\n> SELECT public.svar;\n> svar\n> ------------\n> (10,20,30)\n> (1 row)\n>\n> @@ -1104,31 +1104,31 @@\n> (1 row)\n>\n> DROP VARIABLE public.svar;\n> DROP TYPE public.svar_test_type;\n> CREATE TYPE public.svar_test_type AS (a int, b int);\n> CREATE VARIABLE public.svar AS public.svar_test_type;\n> CREATE VARIABLE public.svar2 AS public.svar_test_type;\n> LET public.svar = (10, 20);\n> ALTER TYPE public.svar_test_type ADD ATTRIBUTE c int;\n> SELECT public.svar;\n> - svar\n> -----------\n> - (10,20,)\n> + svar\n> +------------\n> + (10,20,16)\n> (1 row)\n>\n> LET public.svar2 = (10, 20, 30);\n> ALTER TYPE public.svar_test_type DROP ATTRIBUTE b;\n> SELECT public.svar;\n> - svar\n> --------\n> - (10,)\n> + svar\n> +---------\n> + (10,16)\n> (1 row)\n>\n\nshould be fixed now\n\nThank you for check\n\nPavel", "msg_date": "Sun, 21 Aug 2022 09:54:03 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Op 21-08-2022 om 09:54 schreef Pavel Stehule:\n> ne 21. 8. 
2022 v 6:36 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n> \n>> On Sat, Aug 20, 2022 at 08:44:49PM +0200, Erik Rijkers wrote:\n>>> Op 20-08-2022 om 20:09 schreef Pavel Stehule:\n\n> \n> should be fixed now> \n\n\nYep, all tests OK now.\nThanks!\n\nErik\n\n\n\n", "msg_date": "Sun, 21 Aug 2022 10:15:11 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi Pavel,\n\nOn Sun, Aug 21, 2022 at 09:54:03AM +0200, Pavel Stehule wrote:\n> \n> should be fixed now\n\nI started reviewing the patchset, beginning with 0001 (at least the parts that\ndon't substantially change later) and have a few comments.\n\n- you define new AclMode READ and WRITE. Those bits are precious and I don't\n think it's ok to consume 2 bits for session variables, especially since those\n are the last two bits available since the recent GUC access control patch\n (ACL_SET and ACL_ALTER_SYSTEM). Maybe we could existing INSERT and UPDATE\n privileges instead, like it's done for sequences?\n\n- make check and make-check-world don't pass with this test only. Given that\n the split is mostly done to ease review and probably not intended to be\n committed this way, we probably shouldn't spend efforts to clean up the split\n apart from making sure that each patch compiles cleanly on its own. But in\n this case it brought my attention to misc_sanity.sql test. 
Looking at patch\n 0010, I see:\n\ndiff --git a/src/test/regress/expected/misc_sanity.out b/src/test/regress/expected/misc_sanity.out\nindex a57fd142a9..ce9bad7211 100644\n--- a/src/test/regress/expected/misc_sanity.out\n+++ b/src/test/regress/expected/misc_sanity.out\n@@ -60,7 +60,9 @@ ORDER BY 1, 2;\n pg_index | indpred | pg_node_tree\n pg_largeobject | data | bytea\n pg_largeobject_metadata | lomacl | aclitem[]\n-(11 rows)\n+ pg_variable | varacl | aclitem[]\n+ pg_variable | vardefexpr | pg_node_tree\n+(13 rows)\n\nThis is the test for relations with varlena columns without TOAST table. I\ndon't think that's correct to add those exceptions, and there should be a TOAST\ntable declared for pg_variable too, as noted in the comment above that query.\n\n- nitpicking: s/catalogue/catalog/\n\nSome other comments on other patches while testing things around:\n\n- For sessionvariable.c (in 0002), I see that there are still all the comments\n and code about checking type validity based on a generation number and other\n heuristics. 
I still fail to understand why this is needed at all as the\n stored datum should remain compatible as long as we prevent the few\n incompatible DDL that are also prevented when there's a relation dependency.\n As an example, I try to quickly disable all that code with the following:\n\ndiff --git a/src/backend/commands/sessionvariable.c b/src/backend/commands/sessionvariable.c\nindex 9b4f9482a4..7c8808dc46 100644\n--- a/src/backend/commands/sessionvariable.c\n+++ b/src/backend/commands/sessionvariable.c\n@@ -794,6 +794,8 @@ svartype_verify_composite_fast(SVariableType svt)\n static int64\n get_svariable_valid_type_gennum(SVariableType svt)\n {\n+ return 1;\n+\n HeapTuple tuple;\n bool fast_check = true;\n\n@@ -905,6 +907,8 @@ get_svariabletype(Oid typid)\n static bool\n session_variable_use_valid_type(SVariable svar)\n {\n+ return true;\n+\n Assert(svar);\n Assert(svar->svartype);\n\nAnd session_variable.sql regression test still works just fine. Am I missing\nsomething?\n\nWhile at it, the initial comment should probably say \"free local memory\" rather\nthan \"purge memory\".\n\n- doc are missing for GRANT/REVOKE ... ON ALL VARIABLES\n\n- plpgsql.sgml:\n+ <sect3>\n+ <title><command>Session variables and constants</command></title>\n\nI don't think it's ok to use \"constant\" as an alias for immutable session\nvariable as immutable session variable can actually be changed.\n\nSimilarly, in catalogs.sgml:\n\n+ <structfield>varisimmutable</structfield> <type>boolean</type>\n+ </para>\n+ <para>\n+ True if the variable is immutable (cannot be modified). 
The default value is false.\n+ </para></entry>\n+ </row>\n\nI think there should be a note and a link to the corresponding part in\ncreate_variable.sgml to explain what exactly is an immutable variable, as the\nimplemented behavior (for nullable immutable variable) is somewhat unexpected.\n\n- other nitpicking: pg_variable and struct Variable seems a bit inconsistent.\n For instance one uses vartype and vartypmod and the other typid and typmod,\n while both use varname and varnamespace. I think we should avoid discrepancy\n here.\n\nAlso, there's a sessionvariable.c and a session_variable.h. Let's use\nsession_variable.[ch], as it seems more readable?\n\n-typedef patch: missing SVariableTypeData, some commits need a pgindent, e.g:\n\n+typedef SVariableTypeData * SVariableType;\n\n+typedef SVariableData * SVariable;\n\n+static SessionVariableValue * RestoreSessionVariables(char **start_address,\n+ int *num_session_variables);\n\n+static Query *transformLetStmt(ParseState *pstate,\n+ LetStmt * stmt);\n\n(and multiple others)\n\n\n", "msg_date": "Mon, 22 Aug 2022 15:33:48 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "> +-- test on query with workers\n> +CREATE TABLE svar_test(a int);\n> +INSERT INTO svar_test SELECT * FROM generate_series(1,1000000);\n\nWhen I looked at this, I noticed this huge table.\n\nI don't think you should create such a large table just for this.\n\nTo exercise parallel workers with a smaller table, decrease\nmin_parallel_table_scan_size and others as done in other regression tests.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 22 Aug 2022 09:05:43 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "po 22. 8. 
2022 v 16:05 odesílatel Justin Pryzby <pryzby@telsasoft.com>\nnapsal:\n\n> > +-- test on query with workers\n> > +CREATE TABLE svar_test(a int);\n> > +INSERT INTO svar_test SELECT * FROM generate_series(1,1000000);\n>\n> When I looked at this, I noticed this huge table.\n>\n> I don't think you should create such a large table just for this.\n>\n> To exercise parallel workers with a smaller table, decrease\n> min_parallel_table_scan_size and others as done in other regression tests.\n>\n>\nI fixed it.\n\nThank you for tip\n\nPavel\n\n> --\n> Justin\n>", "msg_date": "Mon, 22 Aug 2022 20:43:39 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "po 22. 8. 2022 v 9:33 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> Hi Pavel,\n>\n> On Sun, Aug 21, 2022 at 09:54:03AM +0200, Pavel Stehule wrote:\n> >\n> > should be fixed now\n>\n> I started reviewing the patchset, beginning with 0001 (at least the parts\n> that\n> don't substantially change later) and have a few comments.\n>\n> - you define new AclMode READ and WRITE. Those bits are precious and I\n> don't\n> think it's ok to consume 2 bits for session variables, especially since\n> those\n> are the last two bits available since the recent GUC access control patch\n> (ACL_SET and ACL_ALTER_SYSTEM). 
Maybe we could existing INSERT and\n> UPDATE\n> privileges instead, like it's done for sequences?\n>\n>\nI have not a strong opinion about it. AclMode is uint32 - so I think there\nare still 15bites reserved. I think so UPDATE and SELECT rights can work,\nbut maybe it is better to use separate rights WRITE, READ to be stronger\nsignalized so the variable is not the relation. On other hand large objects\nuse ACL_UPDATE, ACL_SELECT too, and it works. So I am neutral in this\nquestion. Has somebody here some opinion on this point? If not I'll modify\nthe patch like Julien proposes.\n\nRegards\n\nPavel", "msg_date": "Mon, 22 Aug 2022 21:13:39 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On Mon, Aug 22, 2022 at 09:13:39PM +0200, Pavel Stehule wrote:\n> po 22. 8. 2022 v 9:33 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n>\n> >\n> > - you define new AclMode READ and WRITE. Those bits are precious and I\n> > don't\n> > think it's ok to consume 2 bits for session variables, especially since\n> > those\n> > are the last two bits available since the recent GUC access control patch\n> > (ACL_SET and ACL_ALTER_SYSTEM). Maybe we could existing INSERT and\n> > UPDATE\n> > privileges instead, like it's done for sequences?\n> >\n> >\n> I have not a strong opinion about it. AclMode is uint32 - so I think there\n> are still 15bites reserved. I think so UPDATE and SELECT rights can work,\n> but maybe it is better to use separate rights WRITE, READ to be stronger\n> signalized so the variable is not the relation. On other hand large objects\n> use ACL_UPDATE, ACL_SELECT too, and it works. So I am neutral in this\n> question. Has somebody here some opinion on this point? If not I'll modify\n> the patch like Julien proposes.\n\nActually no, because AclMode is also used to store the grant option part. The\ncomment before AclMode warns about it:\n\n * The present representation of AclItem limits us to 16 distinct rights,\n * even though AclMode is defined as uint32. See utils/acl.h.\n", "msg_date": "Tue, 23 Aug 2022 09:57:37 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi,\n\nOn Tue, Jan 18, 2022 at 10:01:01PM +0100, Pavel Stehule wrote:\n>\n> pá 14. 1. 
2022 v 3:44 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n>\n> > On Thu, Jan 13, 2022 at 07:32:26PM +0100, Pavel Stehule wrote:\n> > > čt 13. 1. 2022 v 19:23 odesílatel Dean Rasheed <dean.a.rasheed@gmail.com\n> > >\n> > > > On Thu, 13 Jan 2022 at 17:42, Pavel Stehule <pavel.stehule@gmail.com>\n> > > > wrote:\n> > > > >\n> > > > > I like the idea of prioritizing tables over variables with warnings\n> > when\n> > > > collision is detected. It cannot break anything. And it allows to using\n> > > > short identifiers when there is not collision.\n> > > >\n> > > > Yeah, that seems OK, as long as it's clearly documented. I don't think\n> > > > a warning is necessary.\n> >\n> > What should be the behavior for a cached plan that uses a variable when a\n> > conflicting relation is later created? I think that it should be the same\n> > as a\n> > search_path change and the plan should be discarded.\n> >\n> > > The warning can be disabled by default, but I think it should be there.\n> > > This is a signal, so some in the database schema should be renamed.\n> > Maybe -\n> > > session_variables_ambiguity_warning.\n> >\n> > I agree that having a way to know that a variable has been bypassed can be\n> > useful.\n> >\n>\n> done\n\nI've been thinking a bit more about the shadowing, and one scenario we didn't\ndiscuss is something like this naive example:\n\nCREATE TABLE tt(a text, b text);\n\nCREATE TYPE abc AS (a text, b text, c text);\nCREATE VARIABLE tt AS abc;\n\nINSERT INTO tt SELECT 'a', 'b';\nLET tt = ('x', 'y', 'z');\n\nSELECT tt.a, tt.b, tt.c FROM tt;\n\nWhich, with the default configuration, currently returns\n\n a | b | c\n---+---+---\n a | b | z\n(1 row)\n\nI feel a bit uncomfortable that the system allows mixing variable attributes\nand relation columns for the same relation name. 
This is even worse here as\npart of the variable attributes are shadowed.\n\nIt feels like a good way to write valid queries that clearly won't do what you\nthink they do, a bit like the correlated sub-query trap, so maybe we should\nhave a way to prevent it.\n\nWhat do you think?\n\n\n", "msg_date": "Tue, 23 Aug 2022 13:56:11 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "út 23. 8. 2022 v 7:56 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> Hi,\n>\n> On Tue, Jan 18, 2022 at 10:01:01PM +0100, Pavel Stehule wrote:\n> >\n> > pá 14. 1. 2022 v 3:44 odesílatel Julien Rouhaud <rjuju123@gmail.com>\n> napsal:\n> >\n> > > On Thu, Jan 13, 2022 at 07:32:26PM +0100, Pavel Stehule wrote:\n> > > > čt 13. 1. 2022 v 19:23 odesílatel Dean Rasheed <\n> dean.a.rasheed@gmail.com\n> > > >\n> > > > > On Thu, 13 Jan 2022 at 17:42, Pavel Stehule <\n> pavel.stehule@gmail.com>\n> > > > > wrote:\n> > > > > >\n> > > > > > I like the idea of prioritizing tables over variables with\n> warnings\n> > > when\n> > > > > collision is detected. It cannot break anything. And it allows to\n> using\n> > > > > short identifiers when there is not collision.\n> > > > >\n> > > > > Yeah, that seems OK, as long as it's clearly documented. I don't\n> think\n> > > > > a warning is necessary.\n> > >\n> > > What should be the behavior for a cached plan that uses a variable\n> when a\n> > > conflicting relation is later created? 
I think that it should be the\n> same\n> > > as a\n> > > search_path change and the plan should be discarded.\n> > >\n> > > > The warning can be disabled by default, but I think it should be\n> there.\n> > > > This is a signal, so some in the database schema should be renamed.\n> > > Maybe -\n> > > > session_variables_ambiguity_warning.\n> > >\n> > > I agree that having a way to know that a variable has been bypassed\n> can be\n> > > useful.\n> > >\n> >\n> > done\n>\n> I've been thinking a bit more about the shadowing, and one scenario we\n> didn't\n> discuss is something like this naive example:\n>\n> CREATE TABLE tt(a text, b text);\n>\n> CREATE TYPE abc AS (a text, b text, c text);\n> CREATE VARIABLE tt AS abc;\n>\n> INSERT INTO tt SELECT 'a', 'b';\n> LET tt = ('x', 'y', 'z');\n>\n> SELECT tt.a, tt.b, tt.c FROM tt;\n>\n> Which, with the default configuration, currently returns\n>\n> a | b | c\n> ---+---+---\n> a | b | z\n> (1 row)\n>\n> I feel a bit uncomfortable that the system allows mixing variable\n> attributes\n> and relation columns for the same relation name. This is even worse here\n> as\n> part of the variable attributes are shadowed.\n>\n> It feels like a good way to write valid queries that clearly won't do what\n> you\n> think they do, a bit like the correlated sub-query trap, so maybe we should\n> have a way to prevent it.\n>\n> What do you think?\n>\n\nI thought about it before. 
I think valid RTE (but with the missing column)\ncan shadow the variable too.\n\nWith this change your query fails:\n\n(2022-08-23 11:05:55) postgres=# SELECT tt.a, tt.b, tt.c FROM tt;\nERROR: column tt.c does not exist\nLINE 1: SELECT tt.a, tt.b, tt.c FROM tt;\n ^\n(2022-08-23 11:06:03) postgres=# set session_variables_ambiguity_warning to\non;\nSET\n(2022-08-23 11:06:19) postgres=# SELECT tt.a, tt.b, tt.c FROM tt;\nWARNING: session variable \"tt.a\" is shadowed\nLINE 1: SELECT tt.a, tt.b, tt.c FROM tt;\n ^\nDETAIL: Session variables can be shadowed by columns, routine's variables\nand routine's arguments with the same name.\nWARNING: session variable \"tt.b\" is shadowed\nLINE 1: SELECT tt.a, tt.b, tt.c FROM tt;\n ^\nDETAIL: Session variables can be shadowed by columns, routine's variables\nand routine's arguments with the same name.\nWARNING: session variable \"public.tt\" is shadowed\nLINE 1: SELECT tt.a, tt.b, tt.c FROM tt;\n ^\nDETAIL: Session variables can be shadowed by tables or table's aliases\nwith the same name.\nERROR: column tt.c does not exist\nLINE 1: SELECT tt.a, tt.b, tt.c FROM tt;\n ^\nRegards\n\nPavel", "msg_date": "Tue, 23 Aug 2022 11:27:45 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "út 23. 8. 2022 v 3:57 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> On Mon, Aug 22, 2022 at 09:13:39PM +0200, Pavel Stehule wrote:\n> > po 22. 8. 2022 v 9:33 odesílatel Julien Rouhaud <rjuju123@gmail.com>\n> napsal:\n> >\n> > >\n> > > - you define new AclMode READ and WRITE. Those bits are precious and I\n> > > don't\n> > > think it's ok to consume 2 bits for session variables, especially\n> since\n> > > those\n> > > are the last two bits available since the recent GUC access control\n> patch\n> > > (ACL_SET and ACL_ALTER_SYSTEM). 
Maybe we could existing INSERT and\n> > > UPDATE\n> > > privileges instead, like it's done for sequences?\n> > >\n> > >\n> > I have not a strong opinion about it. AclMode is uint32 - so I think\n> there\n> > are still 15bites reserved. I think so UPDATE and SELECT rights can work,\n> > but maybe it is better to use separate rights WRITE, READ to be stronger\n> > signalized so the variable is not the relation.
On other hand large objects\n> use ACL_UPDATE, ACL_SELECT too, and it works. So I am neutral in this\n> question. Has somebody here some opinion on this point? If not I'll modify\n> the patch like Julien proposes.\n\nActually no, because AclMode is also used to store the grant option part.  The\ncomment before AclMode warns about it:\n\n * The present representation of AclItem limits us to 16 distinct rights,\n * even though AclMode is defined as uint32.  See utils/acl.h.I missed this. I changed ACL to your proposal in today's patchThank you for your corrections.RegardsPavel", "msg_date": "Tue, 23 Aug 2022 11:29:29 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On Tue, Aug 23, 2022 at 11:27:45AM +0200, Pavel Stehule wrote:\n> �t 23. 8. 2022 v 7:56 odes�latel Julien Rouhaud <rjuju123@gmail.com> napsal:\n>\n> >\n> > I've been thinking a bit more about the shadowing, and one scenario we\n> > didn't\n> > discuss is something like this naive example:\n> >\n> > CREATE TABLE tt(a text, b text);\n> >\n> > CREATE TYPE abc AS (a text, b text, c text);\n> > CREATE VARIABLE tt AS abc;\n> >\n> > INSERT INTO tt SELECT 'a', 'b';\n> > LET tt = ('x', 'y', 'z');\n> >\n> > SELECT tt.a, tt.b, tt.c FROM tt;\n> >\n> > Which, with the default configuration, currently returns\n> >\n> > a | b | c\n> > ---+---+---\n> > a | b | z\n> > (1 row)\n> >\n> > I feel a bit uncomfortable that the system allows mixing variable\n> > attributes\n> > and relation columns for the same relation name. This is even worse here\n> > as\n> > part of the variable attributes are shadowed.\n> >\n> > It feels like a good way to write valid queries that clearly won't do what\n> > you\n> > think they do, a bit like the correlated sub-query trap, so maybe we should\n> > have a way to prevent it.\n> >\n> > What do you think?\n> >\n>\n> I thought about it before. 
I think valid RTE (but with the missing column)\n> can shadow the variable too.\n>\n> With this change your query fails:\n>\n> (2022-08-23 11:05:55) postgres=# SELECT tt.a, tt.b, tt.c FROM tt;\n> ERROR: column tt.c does not exist\n> LINE 1: SELECT tt.a, tt.b, tt.c FROM tt;\n> ^\n> (2022-08-23 11:06:03) postgres=# set session_variables_ambiguity_warning to\n> on;\n> SET\n> (2022-08-23 11:06:19) postgres=# SELECT tt.a, tt.b, tt.c FROM tt;\n> WARNING: session variable \"tt.a\" is shadowed\n> LINE 1: SELECT tt.a, tt.b, tt.c FROM tt;\n> ^\n> DETAIL: Session variables can be shadowed by columns, routine's variables\n> and routine's arguments with the same name.\n> WARNING: session variable \"tt.b\" is shadowed\n> LINE 1: SELECT tt.a, tt.b, tt.c FROM tt;\n> ^\n> DETAIL: Session variables can be shadowed by columns, routine's variables\n> and routine's arguments with the same name.\n> WARNING: session variable \"public.tt\" is shadowed\n> LINE 1: SELECT tt.a, tt.b, tt.c FROM tt;\n> ^\n> DETAIL: Session variables can be shadowed by tables or table's aliases\n> with the same name.\n> ERROR: column tt.c does not exist\n> LINE 1: SELECT tt.a, tt.b, tt.c FROM tt;\n\nGreat, thanks a lot!\n\nCould you add some regression tests for that scenario in the next version,\nsince this is handled by some new code? It will also probably be useful to\nremind any possible committer about that choice.\n\n\n", "msg_date": "Tue, 23 Aug 2022 20:57:07 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\n\npo 22. 8. 
2022 v 9:33 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> Hi Pavel,\n>\n> On Sun, Aug 21, 2022 at 09:54:03AM +0200, Pavel Stehule wrote:\n> >\n> > should be fixed now\n>\n> I started reviewing the patchset, beginning with 0001 (at least the parts\n> that\n> don't substantially change later) and have a few comments.\n>\n> - you define new AclMode READ and WRITE. Those bits are precious and I\n> don't\n> think it's ok to consume 2 bits for session variables, especially since\n> those\n> are the last two bits available since the recent GUC access control patch\n> (ACL_SET and ACL_ALTER_SYSTEM). Maybe we could existing INSERT and\n> UPDATE\n> privileges instead, like it's done for sequences?\n>\n\nchanged - now ACL_SELECT and ACL_UPDATE are used\n\n\n>\n> - make check and make-check-world don't pass with this test only. Given\n> that\n> the split is mostly done to ease review and probably not intended to be\n> committed this way, we probably shouldn't spend efforts to clean up the\n> split\n> apart from making sure that each patch compiles cleanly on its own. But\n> in\n> this case it brought my attention to misc_sanity.sql test. Looking at\n> patch\n> 0010, I see:\n>\n> diff --git a/src/test/regress/expected/misc_sanity.out\n> b/src/test/regress/expected/misc_sanity.out\n> index a57fd142a9..ce9bad7211 100644\n> --- a/src/test/regress/expected/misc_sanity.out\n> +++ b/src/test/regress/expected/misc_sanity.out\n> @@ -60,7 +60,9 @@ ORDER BY 1, 2;\n> pg_index | indpred | pg_node_tree\n> pg_largeobject | data | bytea\n> pg_largeobject_metadata | lomacl | aclitem[]\n> -(11 rows)\n> + pg_variable | varacl | aclitem[]\n> + pg_variable | vardefexpr | pg_node_tree\n> +(13 rows)\n>\n> This is the test for relations with varlena columns without TOAST table. 
I\n> don't think that's correct to add those exceptions, and there should be a\n> TOAST\n> table declared for pg_variable too, as noted in the comment above that\n> query.\n>\n> - nitpicking: s/catalogue/catalog/\n>\n> Some other comments on other patches while testing things around:\n>\n\nfixed\n\n\n>\n> - For sessionvariable.c (in 0002), I see that there are still all the\n> comments\n> and code about checking type validity based on a generation number and\n> other\n> heuristics. I still fail to understand why this is needed at all as the\n> stored datum should remain compatible as long as we prevent the few\n> incompatible DDL that are also prevented when there's a relation\n> dependency.\n> As an example, I try to quickly disable all that code with the following:\n>\n\nI am not able to test (in this situation) the situation where gennum is\nincreased, but I think it is possible, and there are few situations where\ndependency is not enough. But maybe my thoughts are too pessimistic, and\nthis aparate is not necessary.\n\n1. update of binary custom type - the dependency allows an extension\nupdate, and after update the binary format can be changed. Now I think this\npart is useless, because although the extension can be updated, the dll\ncannot be unloaded, so the loaded implementation of custom session type\nwill be the same until session end.\n\n2. altering composite type - the generation number reduces overhead with\nchecking compatibility of stored value and expected value. With gennum I\nneed to run compatibility checks just once per transaction. When the gennum\nis the same, I can return data without any conversion.\n\n3. I try to use gennum for detection of oid overflow. The value is stored\nin the session memory context in the hash table. The related memory can be\ncleaned at transaction end (when memory is deleted) and when I can read\nsystem catalog (transaction is not aborted). 
When a transaction is aborted,\nI cannot read the system catalog, and I have to postpone the cleanup to\nthe next usage of the session variable. Theoretically, the session can be\ninactive for a long time and the system catalog can change a lot (and\nthe oid counter can be restarted).\n\nI am checking:\n\n3.1 whether the variable with the oid still exists\n\n3.2 whether the variable still has a type with the same oid assigned\n\n3.3 whether the type fingerprint is the same - so I can expect that the type with the same\noid is the same type\n\n3.2 and 3.3 are safeguards for cases where the oid counter is restarted and I cannot\ntrust the consistency of the values stored in memory.\n\nThis is a very different situation than, for example, temporary tables. Every\ntemp table for every session has its own entry in the system catalog, so\nprotection based on a dependency can work. But the record of a session variable is\nshared - it is protected inside a transaction, but session variables\nlive in the session. Without a transaction there is no lock on the item in\npg_variable, so I can drop a session variable although its value is\nstored in session memory in some other session. After the drop, the related\nplans are reset, but the stored value itself stays in memory and can be\naccessed - if some future variable takes the same oid. With gennum I have\n3x checks - that should ensure that the returned value is always\nbinary valid.\n\nNow, I am thinking about another, maybe simpler identity check; it\nshould work and it needs less code than the solution based on type\nfingerprints.\n\nI can introduce a 64bit sequence and store its value in the\npg_variable record. Then the identity check can be just savedoid = oid and\nsavedseqnum = seqnum.\n\nWhat do you think about this idea?
The overhead of that can be reduced,\nbecause for on transaction commit drop or on transaction end reset session\nvariables we don't need it.\n\n\n\n\n>\n> diff --git a/src/backend/commands/sessionvariable.c\n> b/src/backend/commands/sessionvariable.c\n> index 9b4f9482a4..7c8808dc46 100644\n> --- a/src/backend/commands/sessionvariable.c\n> +++ b/src/backend/commands/sessionvariable.c\n> @@ -794,6 +794,8 @@ svartype_verify_composite_fast(SVariableType svt)\n> static int64\n> get_svariable_valid_type_gennum(SVariableType svt)\n> {\n> + return 1;\n> +\n> HeapTuple tuple;\n> bool fast_check = true;\n>\n> @@ -905,6 +907,8 @@ get_svariabletype(Oid typid)\n> static bool\n> session_variable_use_valid_type(SVariable svar)\n> {\n> + return true;\n> +\n> Assert(svar);\n> Assert(svar->svartype);\n>\n> And session_variable.sql regression test still works just fine. Am I\n> missing\n> something?\n>\n\nthe regress test doesn't try to reset oid counter\n\n\n>\n> While at it, the initial comment should probably say \"free local memory\"\n> rather\n> than \"purge memory\".\n>\n\nchanged\n\n\n>\n> - doc are missing for GRANT/REVOKE ... ON ALL VARIABLES\n>\n\ndone\n\n\n>\n> - plpgsql.sgml:\n> + <sect3>\n> + <title><command>Session variables and constants</command></title>\n>\n>\nrewroted just to \"Session variables\"\n\n\n\n> I don't think it's ok to use \"constant\" as an alias for immutable session\n> variable as immutable session variable can actually be changed.\n>\n> Similarly, in catalogs.sgml:\n>\n> + <structfield>varisimmutable</structfield> <type>boolean</type>\n> + </para>\n> + <para>\n> + True if the variable is immutable (cannot be modified). 
The\n> default value is false.\n> + </para></entry>\n> + </row>\n>\n> I think there should be a note and a link to the corresponding part in\n> create_variable.sgml to explain what exactly is an immutable variable, as\n> the\n> implemented behavior (for nullable immutable variable) is somewhat\n> unexpected.\n>\n\ndone\n\n\n>\n> - other nitpicking: pg_variable and struct Variable seems a bit\n> inconsistent.\n> For instance one uses vartype and vartypmod and the other typid and\n> typmod,\n> while both use varname and varnamespace. I think we should avoid\n> discrepancy\n> here.\n>\n\nI did it because I needed to rename the namespace field, but the prefix var\nis not the best. I don't think so using same names like pg_variable in\nVariable is good idea (due fields like varisnotnull, varisimmutable), but I\ncan the rename varnane and varnamespace to name and namespaceid, what is\nbetter than varname, and varnamespace.\n\n\n> Also, there's a sessionvariable.c and a session_variable.h. Let's use\n> session_variable.[ch], as it seems more readable?\n>\n\nrenamed\n\n\n>\n> -typedef patch: missing SVariableTypeData, some commits need a pgindent,\n> e.g:\n>\n> +typedef SVariableTypeData * SVariableType;\n>\n> +typedef SVariableData * SVariable;\n>\n> +static SessionVariableValue * RestoreSessionVariables(char\n> **start_address,\n> + int\n> *num_session_variables);\n>\n> +static Query *transformLetStmt(ParseState *pstate,\n> + LetStmt * stmt);\n>\n> (and multiple others)\n>\n\nI fixed these.\n\nThank you for comments\n\nPavel", "msg_date": "Wed, 24 Aug 2022 08:37:09 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "út 23. 8. 2022 v 14:57 odesílatel Julien Rouhaud <rjuju123@gmail.com>\nnapsal:\n\n> On Tue, Aug 23, 2022 at 11:27:45AM +0200, Pavel Stehule wrote:\n> > út 23. 8. 
2022 v 7:56 odesílatel Julien Rouhaud <rjuju123@gmail.com>\n> napsal:\n> >\n> > >\n> > > I've been thinking a bit more about the shadowing, and one scenario we\n> > > didn't\n> > > discuss is something like this naive example:\n> > >\n> > > CREATE TABLE tt(a text, b text);\n> > >\n> > > CREATE TYPE abc AS (a text, b text, c text);\n> > > CREATE VARIABLE tt AS abc;\n> > >\n> > > INSERT INTO tt SELECT 'a', 'b';\n> > > LET tt = ('x', 'y', 'z');\n> > >\n> > > SELECT tt.a, tt.b, tt.c FROM tt;\n> > >\n> > > Which, with the default configuration, currently returns\n> > >\n> > > a | b | c\n> > > ---+---+---\n> > > a | b | z\n> > > (1 row)\n> > >\n> > > I feel a bit uncomfortable that the system allows mixing variable\n> > > attributes\n> > > and relation columns for the same relation name. This is even worse\n> here\n> > > as\n> > > part of the variable attributes are shadowed.\n> > >\n> > > It feels like a good way to write valid queries that clearly won't do\n> what\n> > > you\n> > > think they do, a bit like the correlated sub-query trap, so maybe we\n> should\n> > > have a way to prevent it.\n> > >\n> > > What do you think?\n> > >\n> >\n> > I thought about it before.
I think valid RTE (but with the missing\n> column)\n> > can shadow the variable too.\n> >\n> > With this change your query fails:\n> >\n> > (2022-08-23 11:05:55) postgres=# SELECT tt.a, tt.b, tt.c FROM tt;\n> > ERROR: column tt.c does not exist\n> > LINE 1: SELECT tt.a, tt.b, tt.c FROM tt;\n> > ^\n> > (2022-08-23 11:06:03) postgres=# set session_variables_ambiguity_warning\n> to\n> > on;\n> > SET\n> > (2022-08-23 11:06:19) postgres=# SELECT tt.a, tt.b, tt.c FROM tt;\n> > WARNING: session variable \"tt.a\" is shadowed\n> > LINE 1: SELECT tt.a, tt.b, tt.c FROM tt;\n> > ^\n> > DETAIL: Session variables can be shadowed by columns, routine's\n> variables\n> > and routine's arguments with the same name.\n> > WARNING: session variable \"tt.b\" is shadowed\n> > LINE 1: SELECT tt.a, tt.b, tt.c FROM tt;\n> > ^\n> > DETAIL: Session variables can be shadowed by columns, routine's\n> variables\n> > and routine's arguments with the same name.\n> > WARNING: session variable \"public.tt\" is shadowed\n> > LINE 1: SELECT tt.a, tt.b, tt.c FROM tt;\n> > ^\n> > DETAIL: Session variables can be shadowed by tables or table's aliases\n> > with the same name.\n> > ERROR: column tt.c does not exist\n> > LINE 1: SELECT tt.a, tt.b, tt.c FROM tt;\n>\n> Great, thanks a lot!\n>\n> Could you add some regression tests for that scenario in the next version,\n> since this is handled by some new code? It will also probably be useful to\n> remind any possible committer about that choice.\n>\n\nit is there\n\nRegards\n\nPavel\n\nút 23. 8. 2022 v 14:57 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:On Tue, Aug 23, 2022 at 11:27:45AM +0200, Pavel Stehule wrote:\n> út 23. 8. 
2022 v 7:56 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n>\n> >\n> > I've been thinking a bit more about the shadowing, and one scenario we\n> > didn't\n> > discuss is something like this naive example:\n> >\n> > CREATE TABLE tt(a text, b text);\n> >\n> > CREATE TYPE abc AS (a text, b text, c text);\n> > CREATE VARIABLE tt AS abc;\n> >\n> > INSERT INTO tt SELECT 'a', 'b';\n> > LET tt = ('x', 'y', 'z');\n> >\n> > SELECT tt.a, tt.b, tt.c FROM tt;\n> >\n> > Which, with the default configuration, currently returns\n> >\n> >  a | b | c\n> > ---+---+---\n> >  a | b | z\n> > (1 row)\n> >\n> > I feel a bit uncomfortable that the system allows mixing variable\n> > attributes\n> > and relation columns for the same relation name.  This is even worse here\n> > as\n> > part of the variable attributes are shadowed.\n> >\n> > It feels like a good way to write valid queries that clearly won't do what\n> > you\n> > think they do, a bit like the correlated sub-query trap, so maybe we should\n> > have a way to prevent it.\n> >\n> > What do you think?\n> >\n>\n> I thought about it before. 
I think valid RTE (but with the missing column)\n> can shadow the variable too.\n>\n> With this change your query fails:\n>\n> (2022-08-23 11:05:55) postgres=# SELECT tt.a, tt.b, tt.c FROM tt;\n> ERROR:  column tt.c does not exist\n> LINE 1: SELECT tt.a, tt.b, tt.c FROM tt;\n>                            ^\n> (2022-08-23 11:06:03) postgres=# set session_variables_ambiguity_warning to\n> on;\n> SET\n> (2022-08-23 11:06:19) postgres=# SELECT tt.a, tt.b, tt.c FROM tt;\n> WARNING:  session variable \"tt.a\" is shadowed\n> LINE 1: SELECT tt.a, tt.b, tt.c FROM tt;\n>                ^\n> DETAIL:  Session variables can be shadowed by columns, routine's variables\n> and routine's arguments with the same name.\n> WARNING:  session variable \"tt.b\" is shadowed\n> LINE 1: SELECT tt.a, tt.b, tt.c FROM tt;\n>                      ^\n> DETAIL:  Session variables can be shadowed by columns, routine's variables\n> and routine's arguments with the same name.\n> WARNING:  session variable \"public.tt\" is shadowed\n> LINE 1: SELECT tt.a, tt.b, tt.c FROM tt;\n>                            ^\n> DETAIL:   Session variables can be shadowed by tables or table's aliases\n> with the same name.\n> ERROR:  column tt.c does not exist\n> LINE 1: SELECT tt.a, tt.b, tt.c FROM tt;\n\nGreat, thanks a lot!\n\nCould you add some regression tests for that scenario in the next version,\nsince this is handled by some new code?  It will also probably be useful to\nremind any possible committer about that choice.it is thereRegardsPavel", "msg_date": "Wed, 24 Aug 2022 08:42:07 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Op 24-08-2022 om 08:37 schreef Pavel Stehule:\n>>\n> \n> I fixed these.\n> \n\n > [v20220824-1-*.patch]\n\nHi Pavel,\n\nI noticed just now that variable assignment (i.e., LET) unexpectedly \n(for me anyway) cast the type of the input value. 
Surely that's wrong? \nThe documentation says clearly enough:\n\n'The result must be of the same data type as the session variable.'\n\n\nExample:\n\ncreate variable x integer;\nlet x=1.5;\nselect x, pg_typeof(x);\n x | pg_typeof\n---+-----------\n 2 | integer\n(1 row)\n\n\nIs this correct?\n\nIf such casts (there are several) are intended then the text of the \ndocumentation should be changed.\n\nThanks,\n\nErik\n\n\n\n", "msg_date": "Wed, 24 Aug 2022 10:04:45 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "st 24. 8. 2022 v 10:04 odesílatel Erik Rijkers <er@xs4all.nl> napsal:\n\n> Op 24-08-2022 om 08:37 schreef Pavel Stehule:\n> >>\n> >\n> > I fixed these.\n> >\n>\n> > [v20220824-1-*.patch]\n>\n> Hi Pavel,\n>\n> I noticed just now that variable assignment (i.e., LET) unexpectedly\n> (for me anyway) cast the type of the input value. Surely that's wrong?\n> The documentation says clearly enough:\n>\n> 'The result must be of the same data type as the session variable.'\n>\n>\n> Example:\n>\n> create variable x integer;\n> let x=1.5;\n> select x, pg_typeof(x);\n> x | pg_typeof\n> ---+-----------\n> 2 | integer\n> (1 row)\n>\n>\n> Is this correct?\n>\n> If such casts (there are several) are intended then the text of the\n> documentation should be changed.\n>\n\nyes - the behave is designed like plpgsql assignment or SQL assignment\n\n (2022-08-25 19:35:35) postgres=# do $$\npostgres$# declare i int;\npostgres$# begin\npostgres$# i := 1.5;\npostgres$# raise notice '%', i;\npostgres$# end;\npostgres$# $$;\nNOTICE: 2\nDO\n\n(2022-08-25 19:38:10) postgres=# create table foo1(a int);\nCREATE TABLE\n(2022-08-25 19:38:13) postgres=# insert into foo1 values(1.5);\nINSERT 0 1\n(2022-08-25 19:38:21) postgres=# select * from foo1;\n┌───┐\n│ a │\n╞═══╡\n│ 2 │\n└───┘\n(1 row)\n\nThere are the same rules as in SQL.\n\nThis sentence is not good - the value should 
be casteable to the target\ntype.\n\nRegards\n\nPavel\n\n\n\n\n\n> Thanks,\n>\n> Erik\n>\n>\n\nst 24. 8. 2022 v 10:04 odesílatel Erik Rijkers <er@xs4all.nl> napsal:Op 24-08-2022 om 08:37 schreef Pavel Stehule:\n>>\n> \n> I fixed these.\n> \n\n > [v20220824-1-*.patch]\n\nHi Pavel,\n\nI noticed just now that variable assignment (i.e., LET) unexpectedly \n(for me anyway) cast the type of the input value. Surely that's wrong? \nThe documentation says clearly enough:\n\n'The result must be of the same data type as the session variable.'\n\n\nExample:\n\ncreate variable x integer;\nlet x=1.5;\nselect x, pg_typeof(x);\n  x | pg_typeof\n---+-----------\n  2 | integer\n(1 row)\n\n\nIs this correct?\n\nIf such casts (there are several) are intended then the text of the \ndocumentation should be changed.yes - the behave is designed like plpgsql assignment or SQL assignment (2022-08-25 19:35:35) postgres=# do $$postgres$# declare i int;postgres$# beginpostgres$#   i := 1.5;postgres$#   raise notice '%', i;postgres$# end;postgres$# $$;NOTICE:  2DO(2022-08-25 19:38:10) postgres=# create table foo1(a int);CREATE TABLE(2022-08-25 19:38:13) postgres=# insert into foo1 values(1.5);INSERT 0 1(2022-08-25 19:38:21) postgres=# select * from foo1;┌───┐│ a │╞═══╡│ 2 │└───┘(1 row)There are the same rules as in SQL. This sentence is not good - the value should be casteable to the target type.RegardsPavel\n\nThanks,\n\nErik", "msg_date": "Thu, 25 Aug 2022 19:40:57 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\n\n> - For sessionvariable.c (in 0002), I see that there are still all the\n> comments\n> and code about checking type validity based on a generation number and\n> other\n> heuristics. 
I still fail to understand why this is needed at all as the\n> stored datum should remain compatible as long as we prevent the few\n> incompatible DDL that are also prevented when there's a relation\n> dependency.\n> As an example, I try to quickly disable all that code with the following:\n>\n>\n>\nI am sending an alternative implementation based on using own int8 sequence\nas protection against unwanted oid equation of different session's\nvariables.\n\nThis code is much shorter, and, I think better, but now, the creating\nsequence in bootstrap time is dirty. Maybe instead the sequence can be used\n64bite timestamp or some else - it needs a unique combination of oid, 8byte.\n\nRegards\n\nPavel", "msg_date": "Thu, 25 Aug 2022 19:49:38 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nafter some thinking I think that instead of sequence I can use LSN. The\ncombination oid, LSN should be unique forever\n\nRegards\n\nPavel", "msg_date": "Sat, 27 Aug 2022 13:17:45 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi,\n\nOn Sat, Aug 27, 2022 at 01:17:45PM +0200, Pavel Stehule wrote:\n>\n> after some thinking I think that instead of sequence I can use LSN. The\n> combination oid, LSN should be unique forever\n\nYeah I was about suggesting doing that instead of a sequence, so +1 for that\napproach!\n\nI've been spending a bit of time trying to improve the test coverage on the\nprotection for concurrently deleted and recreated variables, and thought that a\nnew isolation test should be enough. 
I'm attaching a diff (in .txt extension)\nthat could be applied to 009-regress-tests-for-session-variables.patch, but\nwhile working on that I discovered a few problems.\n\nFirst, the pg_debug_show_used_session_variables() function reports what's\ncurrently locally known, but there's no guarantee that\nAcceptInvalidationMessages() will be called prior to its execution. For\ninstance if you're in a transaction and already hold a lock on the function and\nexecute it again.\n\nIt therefore means that it can display that a locally cached variable isn't\ndropped and still holds a value, while it's not the case. While it may be\nsurprising, I think that's still the wanted behavior as you want to know what\nis the cache state. FTR this is tested in the last permutation in the attached\npatch (although the expected output contains the up-to-date information, so you\ncan see the failure).\n\nBut if invalidation are processed when calling the function, the behavior seems\nsurprising as far as I can see the cleanup seems to be done in 2 steps: mark t\nhe hash entry as removed and then remove the hash entry. For instance:\n\n(conn 1) CREATE VARIABLE myvar AS text;\n(conn 1) LET myvar = 'something';\n(conn 2) DROP VARIABLE myvar;\n(conn 1) SELECT schema, name, removed FROM pg_debug_show_used_session_variables();\n schema | name | removed\n--------+-------+---------\n public | myvar | t\n(1 row)\n\n(conn 1) SELECT schema, name, removed FROM pg_debug_show_used_session_variables();\n schema | name | removed\n--------+------+---------\n(0 rows)\n\nWhy are two steps necessary here, and is that really wanted?\n\nFinally, I noticed that it's quite easy to get cache lookup failures when using\ntransactions. 
AFAICS it's because the current code first checks in the local\ncache (which often isn't immediately invalidated when in a transaction),\nreturns an oid (of an already dropped variable), then the code acquires a lock\non that non-existent variable, which internally accepts invalidation after the\nlock is acquired. The rest of the code can then fail with some \"cache lookup\nerror\" in the various functions as the invalidation has now been processed.\nThis is also tested in the attached isolation test.\n\nI think that using a retry approach based on SharedInvalidMessageCounter change\ndetection, like RangeVarGetRelidExtended(), in IdentifyVariable() should be\nenough to fix that class of problem, but maybe some other general functions\nwould need similar protection too.\n\nWhile looking at the testing, I also noticed that the main regression tests\ncomments are now outdated since the new (and more permissive) approach for\ndropped variable detection. For instance:\n\n+ ALTER TYPE public.svar_test_type DROP ATTRIBUTE c;\n+ -- should to fail\n+ SELECT public.svar;\n+ svar \n+ ---------\n+ (10,20)\n+ (1 row)\n+ \n+ ALTER TYPE public.svar_test_type ADD ATTRIBUTE c int;\n+ -- should to fail too (different type, different generation number);\n+ SELECT public.svar;\n+ svar \n+ ----------\n+ (10,20,)\n+ (1 row)\n\nI'm also unsure if this one is showing a broken behavior or not:\n\n+ CREATE VARIABLE public.avar AS int;\n+ -- should to fail\n+ SELECT avar FROM xxtab;\n+ avar\n+ ------\n+ 10\n+ (1 row)\n+ \n+ -- should be ok\n+ SELECT public.avar FROM xxtab;\n+ avar\n+ ------\n+ \n+ (1 row)\n\n\nFor reference, with the code as-is I get the following diff when testing the\nattached isolation test:\n\n--- /Users/rjuju/git/postgresql/src/test/isolation/expected/session-variable.out\t2022-08-29 15:41:11.000000000 +0800\n+++ /Users/rjuju/git/pg/pgmaster_debug/src/test/isolation/output_iso/results/session-variable.out\t2022-08-29 15:42:17.000000000 +0800\n@@ -16,21 +16,21 @@\n step 
let: LET myvar = 'test';\n step val: SELECT myvar;\n myvar\n -----\n test\n (1 row)\n\n step s1: BEGIN;\n step drop: DROP VARIABLE myvar;\n step val: SELECT myvar;\n-ERROR: column or variable \"myvar\" does not exist\n+ERROR: cache lookup failed for session variable 16386\n step sr1: ROLLBACK;\n\n starting permutation: let val dbg drop create dbg val\n step let: LET myvar = 'test';\n step val: SELECT myvar;\n myvar\n -----\n test\n (1 row)\n\n@@ -68,20 +68,16 @@\n schema|name |removed\n ------+-----+-------\n public|myvar|f\n (1 row)\n\n step drop: DROP VARIABLE myvar;\n step create: CREATE VARIABLE myvar AS text\n step dbg: SELECT schema, name, removed FROM pg_debug_show_used_session_variables();\n schema|name |removed\n ------+-----+-------\n-public|myvar|t\n+public|myvar|f\n (1 row)\n\n step val: SELECT myvar;\n-myvar\n------\n-\n-(1 row)\n-\n+ERROR: cache lookup failed for session variable 16389\n step sr1: ROLLBACK;", "msg_date": "Mon, 29 Aug 2022 17:00:12 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "st 24. 8. 2022 v 10:04 odesílatel Erik Rijkers <er@xs4all.nl> napsal:\n\n> Op 24-08-2022 om 08:37 schreef Pavel Stehule:\n> >>\n> >\n> > I fixed these.\n> >\n>\n> > [v20220824-1-*.patch]\n>\n> Hi Pavel,\n>\n> I noticed just now that variable assignment (i.e., LET) unexpectedly\n> (for me anyway) cast the type of the input value. 
Surely that's wrong?\n> The documentation says clearly enough:\n>\n> 'The result must be of the same data type as the session variable.'\n>\n>\n> Example:\n>\n> create variable x integer;\n> let x=1.5;\n> select x, pg_typeof(x);\n> x | pg_typeof\n> ---+-----------\n> 2 | integer\n> (1 row)\n>\n>\n> Is this correct?\n>\n> If such casts (there are several) are intended then the text of the\n> documentation should be changed.\n>\n\n\nI changed this\n\n @@ -58,8 +58,9 @@ LET <replaceable\nclass=\"parameter\">session_variable</replaceable> = DEFAULT\n <term><literal>sql_expression</literal></term>\n <listitem>\n <para>\n- An SQL expression, in parentheses. The result must be of the same\ndata type as the session\n- variable.\n+ An SQL expression (can be subquery in parenthesis). The result must\n+ be of castable to the same data type as the session variable (in\n+ implicit or assignment context).\n </para>\n </listitem>\n </varlistentry>\n\nis it ok?\n\nRegards\n\nPavel\n\n\n> Thanks,\n>\n> Erik\n>\n>", "msg_date": "Wed, 31 Aug 2022 06:23:38 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\npo 29. 8. 2022 v 11:00 odesílatel Julien Rouhaud <rjuju123@gmail.com>\nnapsal:\n\n> Hi,\n>\n> On Sat, Aug 27, 2022 at 01:17:45PM +0200, Pavel Stehule wrote:\n> >\n> > after some thinking I think that instead of sequence I can use LSN. The\n> > combination oid, LSN should be unique forever\n>\n> Yeah I was about suggesting doing that instead of a sequence, so +1 for\n> that\n> approach!\n>\n> I've been spending a bit of time trying to improve the test coverage on the\n> protection for concurrently deleted and recreated variables, and thought\n> that a\n> new isolation test should be enough. I'm attaching a diff (in .txt\n> extension)\n> that could be applied to 009-regress-tests-for-session-variables.patch, but\n> while working on that I discovered a few problems.\n>\n> First, the pg_debug_show_used_session_variables() function reports what's\n> currently locally known, but there's no guarantee that\n> AcceptInvalidationMessages() will be called prior to its execution. For\n> instance if you're in a transaction and already hold a lock on the\n> function and\n> execute it again.\n>\n> It therefore means that it can display that a locally cached variable isn't\n> dropped and still holds a value, while it's not the case. While it may be\n> surprising, I think that's still the wanted behavior as you want to know\n> what\n> is the cache state.
FTR this is tested in the last permutation in the\n> attached\n> patch (although the expected output contains the up-to-date information,\n> so you\n> can see the failure).\n>\n> But if invalidation are processed when calling the function, the behavior\n> seems\n> surprising as far as I can see the cleanup seems to be done in 2 steps:\n> mark t\n> he hash entry as removed and then remove the hash entry. For instance:\n>\n> (conn 1) CREATE VARIABLE myvar AS text;\n> (conn 1) LET myvar = 'something';\n> (conn 2) DROP VARIABLE myvar;\n> (conn 1) SELECT schema, name, removed FROM\n> pg_debug_show_used_session_variables();\n> schema | name | removed\n> --------+-------+---------\n> public | myvar | t\n> (1 row)\n>\n> (conn 1) SELECT schema, name, removed FROM\n> pg_debug_show_used_session_variables();\n> schema | name | removed\n> --------+------+---------\n> (0 rows)\n>\n> Why are two steps necessary here, and is that really wanted?\n>\n\nThe value is removed in the first command, but at the end of transaction.\npg_debug_show_used_session_variables is called before, and at this moment\nthe variable should be in memory.\n\nI enhanced pg_debug_show_used_session_variables about debug output for\nstart and end, and you can see it.\n\n(2022-08-30 19:38:49) postgres=# set client_min_messages to debug1;\nSET\n(2022-08-30 19:38:55) postgres=# CREATE VARIABLE myvar AS text;\nDEBUG: record for session variable \"myvar\" (oid:16390) was created in\npg_variable\nCREATE VARIABLE\n(2022-08-30 19:39:03) postgres=# LET myvar = 'something';\nDEBUG: session variable \"public.myvar\" (oid:16390) has new entry in memory\n(emitted by WRITE)\nDEBUG: session variable \"public.myvar\" (oid:16390) has new value\nLET\n(2022-08-30 19:39:11) postgres=# SELECT schema, name, removed FROM\npg_debug_show_used_session_variables();\nDEBUG: pg_variable_cache_callback 84 2941368844\nDEBUG: session variable \"public.myvar\" (oid:16390) should be rechecked\n(forced by sinval)\nDEBUG: 
pg_debug_show_used_session_variables start\nDEBUG: effective call of sync_sessionvars_all()\nDEBUG: pg_debug_show_used_session_variables end\nDEBUG: session variable \"public.myvar\" (oid:16390) is removing from memory\n┌────────┬───────┬─────────┐\n│ schema │ name │ removed │\n╞════════╪═══════╪═════════╡\n│ public │ myvar │ t │\n└────────┴───────┴─────────┘\n(1 row)\n\n(2022-08-30 19:39:32) postgres=# SELECT schema, name, removed FROM\npg_debug_show_used_session_variables();\nDEBUG: pg_debug_show_used_session_variables start\nDEBUG: pg_debug_show_used_session_variables end\n┌────────┬──────┬─────────┐\n│ schema │ name │ removed │\n╞════════╪══════╪═════════╡\n└────────┴──────┴─────────┘\n(0 rows)\n\nBut I missed call sync_sessionvars_all in the drop variable. If I execute\nthis routine there I can fix this behavior and the cleaning in\nsync_sessionvars_all can be more aggressive.\n\nAfter change\n\n(2022-08-31 06:25:54) postgres=# let x = 10;\nLET\n(2022-08-31 06:25:59) postgres=# SELECT schema, name, removed FROM\npg_debug_show_used_session_variables();\n┌────────┬──────┬─────────┐\n│ schema │ name │ removed │\n╞════════╪══════╪═════════╡\n│ public │ x │ f │\n└────────┴──────┴─────────┘\n(1 row)\n\n-- after drop in other session\n\n(2022-08-31 06:26:00) postgres=# SELECT schema, name, removed FROM\npg_debug_show_used_session_variables();\n┌────────┬──────┬─────────┐\n│ schema │ name │ removed │\n╞════════╪══════╪═════════╡\n└────────┴──────┴─────────┘\n(0 rows)\n\n\n\n\n\n\n>\n> Finally, I noticed that it's quite easy to get cache lookup failures when\n> using\n> transactions. AFAICS it's because the current code first checks in the\n> local\n> cache (which often isn't immediately invalidated when in a transaction),\n> returns an oid (of an already dropped variable), then the code acquires a\n> lock\n> on that non-existent variable, which internally accepts invalidation after\n> the\n> lock is acquired. 
The rest of the code can then fail with some \"cache\n> lookup\n> error\" in the various functions as the invalidation has now been processed.\n> This is also tested in the attached isolation test.\n>\n> I think that using a retry approach based on SharedInvalidMessageCounter\n> change\n> detection, like RangeVarGetRelidExtended(), in IdentifyVariable() should be\n> enough to fix that class of problem, but maybe some other general functions\n> would need similar protection too.\n>\n\nI did it, and with this change it passed the isolation test. Thank you for\nyour important help!\n\n\n\n>\n> While looking at the testing, I also noticed that the main regression tests\n> comments are now outdated since the new (and more permissive) approach for\n> dropped variable detection. For instance:\n>\n> + ALTER TYPE public.svar_test_type DROP ATTRIBUTE c;\n> + -- should to fail\n> + SELECT public.svar;\n> + svar\n> + ---------\n> + (10,20)\n> + (1 row)\n> +\n> + ALTER TYPE public.svar_test_type ADD ATTRIBUTE c int;\n> + -- should to fail too (different type, different generation number);\n> + SELECT public.svar;\n> + svar\n> + ----------\n> + (10,20,)\n> + (1 row)\n>\n>\nthe comments are obsolete, fixed\n\n\n>\n> + CREATE VARIABLE public.avar AS int;\n> + -- should to fail\n> + SELECT avar FROM xxtab;\n> + avar\n> + ------\n> + 10\n> + (1 row)\n> +\n> + -- should be ok\n> + SELECT public.avar FROM xxtab;\n> + avar\n> + ------\n> +\n> + (1 row)\n>\n\nfixed\n\n\n>\n>\n> For reference, with the code as-is I get the following diff when testing\n> the\n> attached isolation test:\n>\n> ---\n> /Users/rjuju/git/postgresql/src/test/isolation/expected/session-variable.out\n> 2022-08-29 15:41:11.000000000 +0800\n> +++\n> /Users/rjuju/git/pg/pgmaster_debug/src/test/isolation/output_iso/results/session-variable.out\n> 2022-08-29 15:42:17.000000000 +0800\n> @@ -16,21 +16,21 @@\n> step let: LET myvar = 'test';\n> step val: SELECT myvar;\n> myvar\n> -----\n> test\n> (1 row)\n>\n> step s1: 
BEGIN;\n> step drop: DROP VARIABLE myvar;\n> step val: SELECT myvar;\n> -ERROR: column or variable \"myvar\" does not exist\n> +ERROR: cache lookup failed for session variable 16386\n> step sr1: ROLLBACK;\n>\n> starting permutation: let val dbg drop create dbg val\n> step let: LET myvar = 'test';\n> step val: SELECT myvar;\n> myvar\n> -----\n> test\n> (1 row)\n>\n> @@ -68,20 +68,16 @@\n> schema|name |removed\n> ------+-----+-------\n> public|myvar|f\n> (1 row)\n>\n> step drop: DROP VARIABLE myvar;\n> step create: CREATE VARIABLE myvar AS text\n> step dbg: SELECT schema, name, removed FROM\n> pg_debug_show_used_session_variables();\n> schema|name |removed\n> ------+-----+-------\n> -public|myvar|t\n> +public|myvar|f\n> (1 row)\n>\n> step val: SELECT myvar;\n> -myvar\n> ------\n> -\n> -(1 row)\n> -\n> +ERROR: cache lookup failed for session variable 16389\n> step sr1: ROLLBACK;\n>\n>\nattached updated patches\n\nRegards\n\nPavel", "msg_date": "Thu, 1 Sep 2022 20:17:31 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nyesterday, I did few stupid errors. 
Fixed now.\n\nrebased today\n\nRegards\n\nPavel", "msg_date": "Fri, 2 Sep 2022 07:42:00 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi,\n\nOn Fri, Sep 02, 2022 at 07:42:00AM +0200, Pavel Stehule wrote:\n>\n> rebased today\n\nAfter some off-list discussion with Pavel, I'm sending some proposal patches\n(in .txt extension) to apply to the last patchset.\n\nTo sum up, when a session issues a DROP VARIABLE, the session will receive an\nsinval notification for its own drop and we don't want to process it\nimmediately, as we need to hold the value in case the transaction is rolled\nback.\nThe current patch avoided that by forcing a single processing of sinval per\ntransaction, and forcing it before dropping the variable. It works but it\nseems to me that postponing all but the first VARIABLEOID sinval to the next\ntransaction is a big hammer, and the sooner we can free some memory the better.\n\nFor an alternative approach the attached patch stores the lxid in the SVariable\nitself when dropping a currently set variable, so we can process all sinval and\nsimply defer to the next transaction the memory cleanup of the variable(s) we\nknow we just dropped. What do you think of that approach?\n\nAs I was working on some changes I also made a pass on session_variable.c. I\ntried to improve some comments a bit, and also got rid of the \"first_time\"\nvariable. The name wasn't really great, and AFAICS it can be replaced by\ntesting whether the memory context has been created yet or not.\n\nBut once that was done I noticed the get_rowtype_value() function. I don't think\nthis function is necessary, as the core code already knows how to deal with\nstored datums when the underlying composite type was modified. I tried to\nbypass that function and always simply return the stored value and all the\ntests run fine.
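To make the rule being relied on here concrete, below is a tiny standalone model (Python, purely illustrative; the function name and data layout are invented, and this is not the backend's heap tuple code) of how a stored composite datum with fewer attributes than the current tuple descriptor is deformed: dropped attributes and attributes added after the value was stored both read back as NULL.

```python
# Toy model of deforming a stored composite datum after its row type was
# altered.  Illustrative only: the names and data layout are invented,
# this is not PostgreSQL's actual HeapTuple representation.

def deform(stored_values, descriptor):
    """stored_values: attribute values saved under the old type definition
    (their count plays the role of the natts stored in the tuple header).
    descriptor: list of (attname, isdropped) pairs for the current type."""
    out = []
    for i, (attname, isdropped) in enumerate(descriptor):
        if isdropped or i >= len(stored_values):
            # dropped columns, and columns added after the value was
            # stored, both read back as NULL
            out.append(None)
        else:
            out.append(stored_values[i])
    return out

# value stored while the type was (a int, b int)
stored = [10, 20]

# after ALTER TYPE ... DROP ATTRIBUTE b and ADD ATTRIBUTE c
descriptor = [("a", False), ("b", True), ("c", False)]

print(deform(stored, descriptor))  # [10, None, None]
```

Under that rule a separate remapping layer would indeed be redundant, which matches the observation that the regression tests still pass when the stored value is returned as-is.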
Is there really any cases when this extra code is needed?\n\nFTR I tried to do a bunch of additional testing using relation as base type for\nvariable, as you can do more with those than plain composite types, but it\nstill always works just fine.\n\nHowever, while doing so I noticed that find_composite_type_dependencies()\nfailed to properly handle dependencies on relation (plain tables, matviews and\npartitioned tables). I'm also adding 2 additional patches to fix this corner\ncase and add an additional regression test for the plain table case.", "msg_date": "Sat, 3 Sep 2022 23:00:51 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On Sat, Sep 03, 2022 at 11:00:51PM +0800, Julien Rouhaud wrote:\n> Hi,\n>\n> On Fri, Sep 02, 2022 at 07:42:00AM +0200, Pavel Stehule wrote:\n> >\n> > rebased today\n>\n> After some off-list discussion with Pavel, I'm sending some proposal patches\n> (in .txt extension) to apply to the last patchset.\n>\n> To sum up, when a session issues a DROP VARIABLE, the session will receive an\n> sinval notification for its own drop and we don't want to process it\n> immediately as we need to hold the value in case the transaction is rollbacked.\n> The current patch avoided that by forcing a single processing of sinval per\n> transaction, and forcing it before dropping the variable. It works but it\n> seems to me that postponing all but the first VARIABLEOID sinval to the next\n> transaction is a big hammer, and the sooner we can free some memory the better.\n>\n> For an alternative approach the attached patch store the lxid in the SVariable\n> itself when dropping a currently set variable, so we can process all sinval and\n> simply defer to the next transaction the memory cleanup of the variable(s) we\n> know we just dropped. 
What do you think of that approach?\n>\n> As I was working on some changes I also made a pass on session_variable.c. I\n> tried to improve a bit some comments, and also got rid of the \"first_time\"\n> variable. The name wasn't really great, and AFAICS it can be replaced by\n> testing whether the memory context has been created yet or not.\n>\n> But once that done I noticed the get_rowtype_value() function. I don't think\n> this function is necessary as the core code already knows how to deal with\n> stored datum when the underlying composite type was modified. I tried to\n> bypass that function and always simply return the stored value and all the\n> tests run fine. Is there really any cases when this extra code is needed?\n>\n> FTR I tried to do a bunch of additional testing using relation as base type for\n> variable, as you can do more with those than plain composite types, but it\n> still always works just fine.\n>\n> However, while doing so I noticed that find_composite_type_dependencies()\n> failed to properly handle dependencies on relation (plain tables, matviews and\n> partitioned tables). I'm also adding 2 additional patches to fix this corner\n> case and add an additional regression test for the plain table case.\n\nI forgot to mention this chunk:\n\n+\t/*\n+\t * Although the value of domain type should be valid (it is\n+\t * checked when it is assigned to session variable), we have to\n+\t * check related constraints anytime. It can be more expensive\n+\t * than in PL/pgSQL. PL/pgSQL forces domain checks when value\n+\t * is assigned to the variable or when value is returned from\n+\t * function. Fortunately, domain types manage cache of constraints by\n+\t * self.\n+\t */\n+\tif (svar->is_domain)\n+\t{\n+\t\tMemoryContext oldcxt = CurrentMemoryContext;\n+\n+\t\t/*\n+\t\t * Store domain_check extra in CurTransactionContext. 
When we are\n+\t\t * in other transaction, the domain_check_extra cache is not valid.\n+\t\t */\n+\t\tif (svar->domain_check_extra_lxid != MyProc->lxid)\n+\t\t\tsvar->domain_check_extra = NULL;\n+\n+\t\tdomain_check(svar->value, svar->isnull,\n+\t\t\t\t\t svar->typid, &svar->domain_check_extra,\n+\t\t\t\t\t CurTransactionContext);\n+\n+\t\tsvar->domain_check_extra_lxid = MyProc->lxid;\n+\n+\t\tMemoryContextSwitchTo(oldcxt);\n+\t}\n\nI agree that storing the domain_check_extra in the transaction context sounds\nsensible, but the memory context handling is not quite right.\n\nLooking at domain_check, it doesn't change the current memory context, so as-is\nall the code related to oldcxt is unnecessary.\n\nSome other callers like expandedrecord.c do switch to a short lived context to\nlimit the lifetime of the possible leak by the expression evaluation, but I\ndon't think that's an option here.\n\n\n", "msg_date": "Sun, 4 Sep 2022 12:31:20 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "ne 4. 9. 2022 v 6:31 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> On Sat, Sep 03, 2022 at 11:00:51PM +0800, Julien Rouhaud wrote:\n> > Hi,\n> >\n> > On Fri, Sep 02, 2022 at 07:42:00AM +0200, Pavel Stehule wrote:\n> > >\n> > > rebased today\n> >\n> > After some off-list discussion with Pavel, I'm sending some proposal\n> patches\n> > (in .txt extension) to apply to the last patchset.\n> >\n> > To sum up, when a session issues a DROP VARIABLE, the session will\n> receive an\n> > sinval notification for its own drop and we don't want to process it\n> > immediately as we need to hold the value in case the transaction is\n> rollbacked.\n> > The current patch avoided that by forcing a single processing of sinval\n> per\n> > transaction, and forcing it before dropping the variable. 
It works but\n> it\n> > seems to me that postponing all but the first VARIABLEOID sinval to the\n> next\n> > transaction is a big hammer, and the sooner we can free some memory the\n> better.\n> >\n> > For an alternative approach the attached patch stores the lxid in the\n> SVariable\n> > itself when dropping a currently set variable, so we can process all\n> sinval and\n> > simply defer to the next transaction the memory cleanup of the\n> variable(s) we\n> > know we just dropped. What do you think of that approach?\n> >\n> > As I was working on some changes I also made a pass on\n> session_variable.c. I\n> > tried to improve a bit some comments, and also got rid of the\n> \"first_time\"\n> > variable. The name wasn't really great, and AFAICS it can be replaced by\n> > testing whether the memory context has been created yet or not.\n> >\n> > But once that was done I noticed the get_rowtype_value() function. I don't\n> think\n> > this function is necessary as the core code already knows how to deal\n> with\n> > stored datums when the underlying composite type was modified. I tried to\n> > bypass that function and always simply return the stored value and all\n> the\n> > tests run fine. Is there really any cases when this extra code is\n> needed?\n>\n\nYes, it works because there is no visible difference between NULL and\ndropped columns, and the real number of attributes is saved in the\nHeapTupleHeader.\n\nSo I removed this function and the related code.\n\n\n\n> >\n> > FTR I tried to do a bunch of additional testing using relation as base\n> type for\n> > variable, as you can do more with those than plain composite types, but\n> it\n> > still always works just fine.\n> >\n> > However, while doing so I noticed that find_composite_type_dependencies()\n> > failed to properly handle dependencies on relation (plain tables,\n> matviews and\n> > partitioned tables).
I'm also adding 2 additional patches to fix this\n> corner\n> > case and add an additional regression test for the plain table case.\n>\n> I forgot to mention this chunk:\n>\n> + /*\n> + * Although the value of domain type should be valid (it is\n> + * checked when it is assigned to session variable), we have to\n> + * check related constraints anytime. It can be more expensive\n> + * than in PL/pgSQL. PL/pgSQL forces domain checks when value\n> + * is assigned to the variable or when value is returned from\n> + * function. Fortunately, domain types manage cache of constraints\n> by\n> + * self.\n> + */\n> + if (svar->is_domain)\n> + {\n> + MemoryContext oldcxt = CurrentMemoryContext;\n> +\n> + /*\n> + * Store domain_check extra in CurTransactionContext. When\n> we are\n> + * in other transaction, the domain_check_extra cache is\n> not valid.\n> + */\n> + if (svar->domain_check_extra_lxid != MyProc->lxid)\n> + svar->domain_check_extra = NULL;\n> +\n> + domain_check(svar->value, svar->isnull,\n> + svar->typid,\n> &svar->domain_check_extra,\n> + CurTransactionContext);\n> +\n> + svar->domain_check_extra_lxid = MyProc->lxid;\n> +\n> + MemoryContextSwitchTo(oldcxt);\n> + }\n>\n> I agree that storing the domain_check_extra in the transaction context\n> sounds\n> sensible, but the memory context handling is not quite right.\n>\n> Looking at domain_check, it doesn't change the current memory context, so\n> as-is\n> all the code related to oldcxt is unnecessary.\n>\n\nremoved\n\n\n>\n> Some other callers like expandedrecord.c do switch to a short lived\n> context to\n> limit the lifetime of the possible leak by the expression evaluation, but I\n> don't think that's an option here.\n>\n\nmerged your patches, big thanks\n\nRegards\n\nPavel", "msg_date": "Sun, 4 Sep 2022 21:27:02 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On Tue, Sep 06, 
2022 at 08:43:59AM +0200, Pavel Stehule wrote:\n> Hi\n>\n> After talking with Julien I removed the \"debug\" fields name and nsname from\n> the SVariable structure. When it is possible it is better to read these fields\n> from the catalog, without the risk of them being stale or needing to be\n> refreshed. In other cases we display only the oid of the variable instead of\n> the name and nsname (it is used just for debug purposes).\n\nThanks! I'm just adding back the forgotten Cc list.", "msg_date": "Tue, 6 Sep 2022 18:23:12 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi,\n\nOn Tue, Sep 06, 2022 at 06:23:12PM +0800, Julien Rouhaud wrote:\n> On Tue, Sep 06, 2022 at 08:43:59AM +0200, Pavel Stehule wrote:\n> > Hi\n> >\n> > After talking with Julien I removed the \"debug\" fields name and nsname from\n> > the SVariable structure. When it is possible it is better to read these fields\n> > from the catalog, without the risk of them being stale or needing to be\n> > refreshed. In other cases we display only the oid of the variable instead of\n> > the name and nsname (it is used just for debug purposes).\n>\n> Thanks!
I'm just adding back the forgotten Cc list.\n\nAbout the last change:\n\n+static void\n+pg_variable_cache_callback(Datum arg, int cacheid, uint32 hashvalue)\n+{\n[...]\n+ elog(DEBUG1, \"session variable \\\"%s.%s\\\" (oid:%u) should be rechecked (forced by sinval)\",\n+ get_namespace_name(get_session_variable_namespace(svar->varid)),\n+ get_session_variable_name(svar->varid),\n+ svar->varid);\n\nThere's no guarantee that the variable still exists in the cache (for variables\ndropped in the current transaction), or even that the callback is called while\nin a transaction state, so we should only display the oid here.\n\nFTR just to be sure I ran all the new regression tests (with this fix) with CCA\nand log_min_messages = DEBUG1 and it didn't hit any problem, so it doesn't seem\nthat there's any other issue hidden somewhere.\n\n\nOther than that I don't see any remaining problems in session_variable.c. I\nstill have a few nitpicking comments though:\n\n+static SVariable\n+prepare_variable_for_reading(Oid varid)\n+{\n[...]\n+\t\t\t/* Store result before releasing Executor memory */\n+\t\t\tset_session_variable(svar, value, isnull, true);\n+\n+\t\t\tMemoryContextSwitchTo(oldcxt);\n+\n+\t\t\tFreeExecutorState(estate);\n\nThe comment and code are a bit misleading, as it's not immediately obvious that\nset_session_variable() doesn't rely on the current memory context for\nallocations. Simply moving the MemoryContextSwitchTo() before the\nset_session_variable() would be better.\n\n+typedef struct SVariableData\n+{\n[...]\n+\tbool\t\tis_domain;\n+\tOid\t\t\tbasetypeid;\n+\tvoid\t *domain_check_extra;\n+\tLocalTransactionId domain_check_extra_lxid;\n\nAFAICS basetypeid isn't needed anymore.\n\n\n+ /* Both lists hold fields of SVariableXActActionItem type */\n+ static List *xact_on_commit_drop_actions = NIL;\n+ static List *xact_on_commit_reset_actions = NIL;\n\nIs it possible to merge both into a single list? I don't think that there's much\nto gain trying to separate those.
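For illustration, a minimal sketch of the single-list idea (a Python toy model; the names are invented and this is not the patch's C code): each entry carries an action tag, and one pass at commit time can still process the drops before the resets.

```python
# Toy model of a single end-of-transaction action list holding both kinds
# of actions.  Illustrative only; names are invented, not the patch's code.
from enum import Enum, auto

class XactAction(Enum):
    ON_COMMIT_DROP = auto()   # temporary variable: drop it at commit
    ON_COMMIT_RESET = auto()  # dropped variable: free its stored value

xact_actions = []             # entries are (varid, action) pairs

def register(varid, action):
    xact_actions.append((varid, action))

def at_commit():
    """One pass over the merged list: handle drops first, then resets."""
    drops = [v for v, a in xact_actions if a is XactAction.ON_COMMIT_DROP]
    resets = [v for v, a in xact_actions if a is XactAction.ON_COMMIT_RESET]
    xact_actions.clear()
    return [("drop", v) for v in drops] + [("reset", v) for v in resets]

register(16384, XactAction.ON_COMMIT_DROP)
register(16385, XactAction.ON_COMMIT_RESET)
print(at_commit())  # [('drop', 16384), ('reset', 16385)]
```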
They shouldn't contain a lot of entries, and\nthey're usually scanned at the same time anyway.\n\nThis is especially important as one of the tricky parts of this patchset is\nmaintaining those lists across subtransactions, and since both have the same\nheuristics all the related code is duplicated.\n\nI see that in AtPreEOXact_SessionVariable_on_xact_actions() both lists are\nhandled interleaved with the xact_recheck_varids, but I don't see any reason\nwhy we couldn't process both action lists first and then process the rechecks.\nI did a quick test and don't see any failure in the regression tests.\n\n\n+void\n+RemoveSessionVariable(Oid varid)\n+{\n\nIt looks like a layering violation to have (part of) the code for CREATE\nVARIABLE in pg_variable.[ch] and the code for DROP VARIABLE in\nsession_variable.[ch].\n\nI think it was done mostly because it was the initial sync_sessionvars_all()\nthat was responsible to avoid cleaning up memory for variables dropped in the\ncurrent transaction, but that's not a requirement anymore. So I don't see\nanything preventing us from moving RemoveSessionVariable() into pg_variable,\nand exporting some function in session_variable to do the additional work for\nproperly maintaining the hash table if needed (with that knowledge held in\nsession_variable, not in pg_variable). You should only need to pass the oid of\nthe variable and the eoxaction.\n\nSimilarly, why not move DefineSessionVariable() into pg_variable and expose some\nAPI in session_variable to register the needed SVAR_ON_COMMIT_DROP action?\n\nAlso, while not a problem I don't think that the CommandCounterIncrement() is\nnecessary in DefineSessionVariable(). CREATE VARIABLE is a single operation\nand you can't have anything else running in the same ProcessUtility() call.\nAnd since cd3e27464cc you have the guarantee that a CommandCounterIncrement()\nwill happen at the end of the utility command processing.\n\nWhile at it, maybe it would be good to add some extra tests in\nsrc/test/modules/test_extensions. I'm thinking of a version 1.0 that creates a\nvariable and initializes the value (and an extra step after creating the\nextension to make sure that the value is really set), and an upgrade to 2.0\nthat creates a temp variable on commit drop, which has to fail due to the\ndependency on the extension.", "msg_date": "Thu, 8 Sep 2022 15:18:43 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "čt 8. 9. 2022 v 9:18 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> Hi,\n>\n> On Tue, Sep 06, 2022 at 06:23:12PM +0800, Julien Rouhaud wrote:\n> > On Tue, Sep 06, 2022 at 08:43:59AM +0200, Pavel Stehule wrote:\n> > > Hi\n> > >\n> > > After talking with Julien I removed the \"debug\" fields name and nsname\n> from\n> > > the SVariable structure. When it is possible it is better to read these\n> fields\n> > > from the catalog, without the risk of them being stale or needing to be\n> > > refreshed. In other cases we display only the oid of the variable\n> instead of the name and\n> > > nsname (it is used just for debug purposes).\n> >\n> > Thanks!
I'm just adding back the forgotten Cc list.\n>\n> About the last change:\n>\n> +static void\n> +pg_variable_cache_callback(Datum arg, int cacheid, uint32 hashvalue)\n> +{\n> [...]\n> + elog(DEBUG1, \"session variable \\\"%s.%s\\\" (oid:%u) should be\n> rechecked (forced by sinval)\",\n> +\n> get_namespace_name(get_session_variable_namespace(svar->varid)),\n> + get_session_variable_name(svar->varid),\n> + svar->varid);\n>\n>\nfixed\n\n\n> There's no guarantee that the variable still exists in cache (for variables\n> dropped in the current transaction), or even that the callback is called\n> while\n> in a transaction state, so we should only display the oid here.\n>\n> FTR just to be sure I ran all the new regression tests (with this fix)\n> with CCA\n> and log_min_messages = DEBUG1 and it didn't hit any problem, so it doesn't\n> seem\n> that there's any other issue hidden somewhere.\n>\n>\n> Other than that I don't see any remaining problems in session_variable.c.\n> I\n> still have a few nitpicking comments though:\n>\n> +static SVariable\n> +prepare_variable_for_reading(Oid varid)\n> +{\n> [...]\n> + /* Store result before releasing Executor memory */\n> + set_session_variable(svar, value, isnull, true);\n> +\n> + MemoryContextSwitchTo(oldcxt);\n> +\n> + FreeExecutorState(estate);\n>\n> The comment and code is a bit misleading, as it's not immediately obvious\n> that\n> set_session_variable() doesn't rely on the current memory contex for\n> allocations. 
Simply moving the MemoryContextSwitchTo() before the\n> set_session_variable() would be better.\n>\n\nchanged\n\n\n>\n> +typedef struct SVariableData\n> +{\n> [...]\n> + bool is_domain;\n> + Oid basetypeid;\n> + void *domain_check_extra;\n> + LocalTransactionId domain_check_extra_lxid;\n>\n> AFAICS basetypeid isn't needed anymore.\n>\n>\nremoved\n\n\n>\n> + /* Both lists hold fields of SVariableXActActionItem type */\n> + static List *xact_on_commit_drop_actions = NIL;\n> + static List *xact_on_commit_reset_actions = NIL;\n>\n> Is it possible to merge both in a single list? I don't think that there's\n> much\n> to gain trying to separate those. They shouldn't contain a lot of\n> entries, and\n> they're usually scanned at the same time anyway.\n>\n> This is especially important as one of the tricky parts of this patchset is\n> maintaining those lists across subtransactions, and since both have the\n> same\n> heuristics all the related code is duplicated.\n>\n> I see that in AtPreEOXact_SessionVariable_on_xact_actions() both lists are\n> handled interleaved with the xact_recheck_varids, but I don't see any\n> reason\n> why we couldn't process both action lists first and then process the\n> rechecks.\n> I did a quick test and don't see any failure in the regression tests.\n>\n\nOriginally it was not possible, because there was no xact_reset_varids\nlist, and without this list the processing\nON_COMMIT_DROP started DROP VARIABLE command, and there was a request for\nON_COMMIT_RESET action.\nNow, it is possible, because in RemoveSessionVariable is conditional\nexecution:\n\n<--><--><-->if (!svar->eox_reset)\n<--><--><--><-->register_session_variable_xact_action(varid,\n<--><--><--><--><--><--><--><--><--><--><--><--><--> SVAR_ON_COMMIT_RESET);\n<--><-->}\n\nSo when we process ON_COMMIT_DROP actions, we know that the reset will not\nbe processed by ON_COMMIT_RESET action,\nand then these lists can be merged.\n\nso I merged these two lists to one\n\n\n\n>\n>\n> +void\n> 
+RemoveSessionVariable(Oid varid)\n> +{\n>\n> I looks like a layering violation to have (part of) the code for CREATE\n> VARIABLE in pg_variable.[ch] and the code for DROP VARIABLE in\n> session_variable.[ch].\n>\n> I think it was done mostly because it was the initial\n> sync_sessionvars_all()\n> that was responsible to avoid cleaning up memory for variables dropped in\n> the\n> current transaction, but that's not a requirement anymore. So I don't see\n> anything preventing us from moving RemoveSessionVariable() in pg_variable,\n> and\n> export some function in session_variable to do the additional work for\n> properly\n> maintaining the hash table if needed (with that knowledge held in\n> session_variable, not in pg_variable). You should only need to pass the\n> oid of\n> the variable and the eoxaction.\n>\n\nI am not sure if the proposed change helps. With it I need to break\nencapsulation. Now, all implementation details are hidden in\nsession_variable.c.\n\nI understand that the operation Define and Remove are different from\noperations Set and Get, but all are commands, and all need access to\nsessionvars and some lists.\n\n\n>\n> Simlarly, why not move DefineSessionVariable() in pg_variable and expose\n> some\n> API in session_variable to register the needed SVAR_ON_COMMIT_DROP action?\n>\n> Also, while not a problem I don't think that the CommandCounterIncrement()\n> is\n> necessary in DefineSessionVariable(). CREATE VARIABLE is a single\n> operation\n> and you can't have anything else running in the same ProcessUtility() call.\n> And since cd3e27464cc you have the guarantee that a\n> CommandCounterIncrement()\n> will happen at the end of the utility command processing.\n>\n\nremoved\n\n\n>\n> While at it, maybe it would be good to add some extra tests in\n> src/test/modules/test_extensions. 
I'm thinking a version 1.0 that creates\n> a\n> variable and initialize the value (and and extra step after creating the\n> extension to make sure that the value is really set), and an upgrade to 2.0\n> that creates a temp variable on commit drop, that has to fail due to the\n> dependecy on the extension.\n>\n\nIn updated patches I replaced used cacheMemoryContext by\nTopTransactionContext what is more correct (I think)\n\nRegards\n\nPavel", "msg_date": "Sat, 10 Sep 2022 22:12:03 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nchanges:\n\n- some minor cleaning\n- refactoring of RemoveSessionVariable - move part of code to pg_variable.c\n\nRegards\n\nPavel", "msg_date": "Sun, 11 Sep 2022 21:29:49 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi,\n\nOn Sun, Sep 11, 2022 at 09:29:49PM +0200, Pavel Stehule wrote:\n>>\n>> Originally it was not possible, because there was no xact_reset_varids list, and without this list the processing\n>> ON_COMMIT_DROP started DROP VARIABLE command, and there was a request for ON_COMMIT_RESET action.\n>> Now, it is possible, because in RemoveSessionVariable is conditional execution:\n>> \n>> <--><--><-->if (!svar->eox_reset)\n>> <--><--><--><-->register_session_variable_xact_action(varid,\n>> <--><--><--><--><--><--><--><--><--><--><--><--><--> SVAR_ON_COMMIT_RESET);\n>> <--><-->}\n>> \n>> So when we process ON_COMMIT_DROP actions, we know that the reset will not be processed by ON_COMMIT_RESET action,\n>> and then these lists can be merged.\n>> \n>> so I merged these two lists to one\n\nThanks! 
This really helps with code readability, and after looking at it I\nfound some issues (see below).\n>\n> changes:\n>\n> - some minor cleaning\n> - refactoring of RemoveSessionVariable - move part of code to pg_variable.c\n\nThanks. I think we could still do more to split what code belongs to\npg_variable.c and session_variable.c. In my opinion, the various DDL code\nshould only invoke functions in pg_variable.c, which themselves can call\nfunctions in session_variable.c if needed, and session_variable shouldn't know\nabout CreateSessionVarStmt (which should probably be renamed\nCreateVariableStmt?) or VariableRelationId. After an off-list bikeshedding\nsession with Pavel, we came up with SessionVariableCreatePostprocess() and\nSessionVariableDropPostprocess() for the functions in session_variable.c called\nby pg_variable.c when handling CREATE VARIABLE and DROP VARIABLE commands.\n\nI'm attaching a new patchset with this change and some more (see below). I'm\nnot sending .txt files as this is rebased on top of the recent GUC refactoring\npatch. It won't change the cfbot outcome though, as I also add new regression\ntests that are for now failing (see below). I tried to keep the changes in\nextra \"FIXUP\" patches when possible, but the API changes in the first patch\ncause conflicts in the next one, so the big session variable patch has to\ncontain the needed changes.\n\nIn this patchset, I also changed the following:\n\n- global pass on the comments in session_variable\n- removed now useless sessionvars_types\n- added missing prototypes for static functions (for consistency), and moved\n all the static functions before the non-static functions\n- minor other nitpicking / stylistic changes\n\nHere are the problems I found:\n\n- IdentifyVariable()\n\n\t\t/*\n\t\t * Lock relation. This will also accept any pending invalidation\n\t\t * messages. 
If we got back InvalidOid, indicating not found, then\n\t\t * there's nothing to lock, but we accept invalidation messages\n\t\t * anyway, to flush any negative catcache entries that may be\n\t\t * lingering.\n\t\t */\n+\t\tif (!OidIsValid(varid))\n+\t\t\tAcceptInvalidationMessages();\n+\t\telse if (OidIsValid(varid))\n+\t\t\tLockDatabaseObject(VariableRelationId, varid, 0, AccessShareLock);\n+\n+\t\tif (inval_count == SharedInvalidMessageCounter)\n+\t\t\tbreak;\n+\n+\t\tretry = true;\n+\t\told_varid = varid;\n+\t}\n\nAFAICS it's correct, but just to be extra cautious I'd explicitly set varid to\nInvalidOid before looping, so you restart in the same condition as the first\niteration (since varid is initialized when declared). Also, the comments should\nbe modified, it's \"Lock variable\", not \"Lock relation\", same for the comment in\nthe previous chunk (\"we've locked the relation that used to have this\nname...\").\n\n+Datum\n+pg_debug_show_used_session_variables(PG_FUNCTION_ARGS)\n+{\n+[...]\n+\t\t\telse\n+\t\t\t{\n+\t\t\t\t/*\n+\t\t\t\t * When session variable was removed from catalog, but still\n+\t\t\t\t * it in memory. The memory was not purged yet.\n+\t\t\t\t */\n+\t\t\t\tnulls[1] = true;\n+\t\t\t\tnulls[2] = true;\n+\t\t\t\tnulls[4] = true;\n+\t\t\t\tvalues[5] = BoolGetDatum(true);\n+\t\t\t\tnulls[6] = true;\n+\t\t\t\tnulls[7] = true;\n+\t\t\t\tnulls[8] = true;\n+\t\t\t}\n\nI'm wondering if we could try to improve things a bit here. Maybe display the\nvariable oid instead of its name as we still have that information, the type\n(using FORMAT_TYPE_ALLOW_INVALID as there's no guarantee that the type would\nstill exist without the dependency) and whether the variable is valid (at least\nper its stored value). 
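The fallback branch above could then look something like the following. This is only an illustrative, uncompiled sketch, not code from the patchset: `format_type_extended()` with `FORMAT_TYPE_ALLOW_INVALID` is the existing backend API, but `svar->typid` and `svar->isnull` are assumptions about fields of the patch's SVariableData struct.

```c
/* Sketch only: variable was dropped from the catalog but is still cached */
values[0] = ObjectIdGetDatum(svar->varid);	/* oid is still known */
nulls[1] = true;		/* schema name no longer resolvable */
nulls[2] = true;		/* variable name no longer resolvable */
values[3] = ObjectIdGetDatum(svar->typid);	/* assumed field */
values[4] = CStringGetTextDatum(format_type_extended(svar->typid, -1,
													 FORMAT_TYPE_ALLOW_INVALID));
values[5] = BoolGetDatum(true);		/* removed from catalog */
values[6] = BoolGetDatum(!svar->isnull);	/* assumed has_value flag */
nulls[7] = true;		/* ACL checks would error for dropped roles */
nulls[8] = true;
```
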
We can keep NULL for the privileges, as there's no API to\navoid erroring if the role has been dropped.\n\n+{ oid => '8488', descr => 'debug list of used session variables',\n+ proname => 'pg_debug_show_used_session_variables', prorows => '1000', proretset => 't',\n+ provolatile => 's', prorettype => 'record', proargtypes => '',\n+ proallargtypes => '{oid,text,text,oid,text,bool,bool,bool,bool}',\n+ proargmodes => '{o,o,o,o,o,o,o,o,o}',\n+ proargnames => '{varid,schema,name,typid,typname,removed,has_value,can_read,can_write}',\n\nSince we change READ / WRITE acl for SELECT / UPDATE, we should rename the\ncolumns to can_select and can_update.\n\n+static void\n+pg_variable_cache_callback(Datum arg, int cacheid, uint32 hashvalue)\n+{\n+ [...]\n+\twhile ((svar = (SVariable) hash_seq_search(&status)) != NULL)\n+\t{\n+\t\tif (hashvalue == 0 || svar->hashvalue == hashvalue)\n+\t\t{\n+ [...]\n+\t\t\txact_recheck_varids = list_append_unique_oid(xact_recheck_varids,\n+\t\t\t\t\t\t\t\t\t\t\t\t\t\t svar->varid);\n\nThis has a pretty terrible complexity. It can degenerate badly, and there\nisn't any CHECK_FOR_INTERRUPTS so you could easily lock a backend for quite\nsome time.\n\nI think we should just keep appending oids, and do a list_sort(list,\nlist_oid_cmp) and list_deduplicate_oid(list) before processing the list, in\nsync_sessionvars_all() and AtPreEOXact_SessionVariable_on_xact_actions().\n\nMaybe while at it we could reuse sync_sessionvars_all in\nAtPreEOXact_SessionVariable_on_xact_actions (with a way to ask\nfor the lxid check or not), rather than duplicating the whole logic twice?\n\n+/*\n+ * Fast drop of the complete content of all session variables hash table.\n+ * This is code for DISCARD VARIABLES command. 
This command\n+ * cannot be run inside transaction, so we don't need to handle\n+ * end of transaction actions.\n+ */\n+void\n+ResetSessionVariables(void)\n+{\n+\t/* Destroy hash table and reset related memory context */\n+\tif (sessionvars)\n+\t{\n+\t\thash_destroy(sessionvars);\n+\t\tsessionvars = NULL;\n+\n+\t\thash_destroy(sessionvars_types);\n+\t\tsessionvars_types = NULL;\n+\t}\n+\n+\t/* Release memory allocated by session variables */\n+\tif (SVariableMemoryContext != NULL)\n+\t\tMemoryContextReset(SVariableMemoryContext);\n+\n+\t/*\n+\t * There are not any session variables left, so simply trim xact\n+\t * action list, and other lists.\n+\t */\n+\tlist_free_deep(xact_on_commit_actions);\n+\txact_on_commit_actions = NIL;\n+\n+\t/* We should clean xact_reset_varids */\n+\tlist_free(xact_reset_varids);\n+\txact_reset_varids = NIL;\n+\n+\t/* we should clean xact_recheck_varids */\n+\tlist_free(xact_recheck_varids);\n+\txact_recheck_varids = NIL;\n+}\n\nThe initial comment is wrong. This function is used for both DISCARD VARIABLES\nand DISCARD ALL, but only DISCARD ALL isn't allowed in a transaction (I fixed\nthe comment in the attached patchset).\nWe should allow DISCARD VARIABLES in a transaction, therefore it needs some\nmore thinking on which list can be freed, and in which context it should hold\nits data. AFAICS the only problematic case is ON COMMIT DROP, but an extra\ncheck wouldn't hurt. 
For instance:\n\nrjuju=# BEGIN;\nBEGIN\n\nrjuju=# CREATE TEMP VARIABLE v AS int ON COMMIT DROP;\nCREATE VARIABLE\n\nrjuju=# DISCARD VARIABLES ;\nDISCARD VARIABLES\n\nrjuju=# COMMIT;\nCOMMIT\n\nrjuju=# \\dV\n List of variables\n Schema | Name | Type | Collation | Nullable | Mutable | Default | Owner | Transactional end action\n-----------+------+---------+-----------+----------+---------+---------+-------+--------------------------\n pg_temp_3 | v | integer | | t | t | | rjuju | ON COMMIT DROP\n(1 row)\n\nNote that I still think that keeping a single List for both SVariableXActAction\nhelps for readability, even if it means cherry-picking which items should be\nremoved on DISCARD VARIABLES (which shouldn't be a very frequent operation\nanyway).\n\nAlso, xact_recheck_varids is allocated in SVariableMemoryContext, so DISCARD\nVARIABLE will crash if there's any pending recheck action.\n\nThere's only one regression test for DISCARD VARIABLE, which clearly wasn't\nenough. There should be one for the ON COMMIT DROP (which can be added in a\nnormal regression test), and one with all action lists populated (which needs\nto be in the isolation tester). Both are added in the patchset in a suggestion\npatch, and for now the first test fails and the second crashes.\n\n\n- set_session_variable() is documented to either succeed or not change the\n currently set value. While it's globally true, I see 2 things that could be\n problematic:\n\n - free_session_variable_value() could technically fail. However, I don't see\n how it could be happening unless there's a memory corruption, so this would\n result in either an abort, or a backend in a very bad state. Anyway, since\n pfree() can clearly ereport(ERROR) we should probably do something about\n it. That being said, I don't really see the point of trying to preserve a\n value that looks like random pointer, which will probably cause a segfault\n the next time it's used. 
Maybe add a PG_TRY block around the call and mark\n it as invalid (and set freeval to false) if that happens?\n\n - the final elog(DEBUG1) can also fail. It also seems highly unlikely, so\n maybe accept that this exception is ok? For now I'm adding such a comment\n in a suggestion patch.\n\n- prepare_variable_for_reading() and SetSessionVariable():\n\n+\t/* Ensure so all entries in sessionvars hash table are valid */\n+\tsync_sessionvars_all();\n+\n+\t/* Protect used session variable against drop until transaction end */\n+\tLockDatabaseObject(VariableRelationId, varid, 0, AccessShareLock);\n\nIt's possible that a session variable is dropped after calling\nsync_sessionvars_all(), and we would receive the sinval when acquiring the lock\non VariableRelationId but not process it until the next sync_sessionvars_all\ncall. I think we should acquire the lock first and then call\nsync_sessionvars_all. I did that in the suggestion patch.", "msg_date": "Fri, 16 Sep 2022 11:59:04 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On Fri, Sep 16, 2022 at 11:59:04AM +0800, Julien Rouhaud wrote:\n> Hi,\n> \n> On Sun, Sep 11, 2022 at 09:29:49PM +0200, Pavel Stehule wrote:\n> >>\n> >> Originally it was not possible, because there was no xact_reset_varids list, and without this list the processing\n> >> ON_COMMIT_DROP started DROP VARIABLE command, and there was a request for ON_COMMIT_RESET action.\n> >> Now, it is possible, because in RemoveSessionVariable is conditional execution:\n> >> \n> >> <--><--><-->if (!svar->eox_reset)\n> >> <--><--><--><-->register_session_variable_xact_action(varid,\n> >> <--><--><--><--><--><--><--><--><--><--><--><--><--> SVAR_ON_COMMIT_RESET);\n> >> <--><-->}\n> >> \n> >> So when we process ON_COMMIT_DROP actions, we know that the reset will not be processed by ON_COMMIT_RESET action,\n> >> and then these lists can be 
merged.\n> >> \n> >> so I merged these two lists to one\n> \n> Thanks! This really helps with code readability, and after looking at it I\n> found some issues (see below).\n> >\n> > changes:\n> >\n> > - some minor cleaning\n> > - refactoring of RemoveSessionVariable - move part of code to pg_variable.c\n> \n> Thanks. I think we could still do more to split what code belongs to\n> pg_variable.c and session_variable.c. In my opinion, the various DDL code\n> should only invoke functions in pg_variable.c, which themselves can call\n> function in session_variable.c if needed, and session_variable shouldn't know\n> about CreateSessionVarStmt (which should probably be rename\n> CreateVariableStmt?) or VariableRelationId. After an off-list bikeshedding\n> session with Pavel, we came up with SessionVariableCreatePostprocess() and\n> SessionVariableDropPostprocess() for the functions in session_variable.c called\n> by pg_variable.c when handling CREATE VARIABLE and DROP VARIABLE commands.\n> \n> I'm attaching a new patchset with this change and some more (see below). I'm\n> not sending .txt files as this is rebased on top on the recent GUC refactoring\n> patch. It won't change the cfbot outcome though, as I also add new regression\n> tests that are for now failing (see below). I tried to keep the changes in\n> extra \"FIXUP\" patches when possible, but the API changes in the first patch\n> cause conflicts in the next one, so the big session variable patch has to\n> contain the needed changes.\n> \n> In this patchset, I also changed the following:\n> \n> - global pass on the comments in session_variable\n> - removed now useless sessionvars_types\n> - added missing prototypes for static functions (for consistency), and moved\n> all the static functions before the static function\n> - minor other nitpicking / stylistic changes\n> \n> Here are the problems I found:\n> \n> - IdentifyVariable()\n> \n> \t\t/*\n> \t\t * Lock relation. 
This will also accept any pending invalidation\n> \t\t * messages. If we got back InvalidOid, indicating not found, then\n> \t\t * there's nothing to lock, but we accept invalidation messages\n> \t\t * anyway, to flush any negative catcache entries that may be\n> \t\t * lingering.\n> \t\t */\n> +\t\tif (!OidIsValid(varid))\n> +\t\t\tAcceptInvalidationMessages();\n> +\t\telse if (OidIsValid(varid))\n> +\t\t\tLockDatabaseObject(VariableRelationId, varid, 0, AccessShareLock);\n> +\n> +\t\tif (inval_count == SharedInvalidMessageCounter)\n> +\t\t\tbreak;\n> +\n> +\t\tretry = true;\n> +\t\told_varid = varid;\n> +\t}\n> \n> AFAICS it's correct, but just to be extra cautious I'd explicitly set varid to\n> InvalidOid before looping, so you restart in the same condition as the first\n> iteration (since varid is initialize when declared). Also, the comments should\n> be modified, it's \"Lock variable\", not \"Lock relation\", same for the comment in\n> the previous chunk (\"we've locked the relation that used to have this\n> name...\").\n> \n> +Datum\n> +pg_debug_show_used_session_variables(PG_FUNCTION_ARGS)\n> +{\n> +[...]\n> +\t\t\telse\n> +\t\t\t{\n> +\t\t\t\t/*\n> +\t\t\t\t * When session variable was removed from catalog, but still\n> +\t\t\t\t * it in memory. The memory was not purged yet.\n> +\t\t\t\t */\n> +\t\t\t\tnulls[1] = true;\n> +\t\t\t\tnulls[2] = true;\n> +\t\t\t\tnulls[4] = true;\n> +\t\t\t\tvalues[5] = BoolGetDatum(true);\n> +\t\t\t\tnulls[6] = true;\n> +\t\t\t\tnulls[7] = true;\n> +\t\t\t\tnulls[8] = true;\n> +\t\t\t}\n> \n> I'm wondering if we could try to improve things a bit here. Maybe display the\n> variable oid instead of its name as we still have that information, the type\n> (using FORMAT_TYPE_ALLOW_INVALID as there's no guarantee that the type would\n> still exist without the dependency) and whether the variable is valid (at least\n> per its stored value). 
We can keep NULL for the privileges, as there's no API\n> avoid erroring if the role has been dropped.\n> \n> +{ oid => '8488', descr => 'debug list of used session variables',\n> + proname => 'pg_debug_show_used_session_variables', prorows => '1000', proretset => 't',\n> + provolatile => 's', prorettype => 'record', proargtypes => '',\n> + proallargtypes => '{oid,text,text,oid,text,bool,bool,bool,bool}',\n> + proargmodes => '{o,o,o,o,o,o,o,o,o}',\n> + proargnames => '{varid,schema,name,typid,typname,removed,has_value,can_read,can_write}',\n> \n> Since we change READ / WRITE acl for SELECT / UPDATE, we should rename the\n> column can_select and can_update.\n> \n> +static void\n> +pg_variable_cache_callback(Datum arg, int cacheid, uint32 hashvalue)\n> +{\n> + [...]\n> +\twhile ((svar = (SVariable) hash_seq_search(&status)) != NULL)\n> +\t{\n> +\t\tif (hashvalue == 0 || svar->hashvalue == hashvalue)\n> +\t\t{\n> + [...]\n> +\t\t\txact_recheck_varids = list_append_unique_oid(xact_recheck_varids,\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t\t svar->varid);\n> \n> This has a pretty terrible complexity. It can degenerate badly, and there\n> isn't any CHECK_FOR_INTERRUPTS so you could easily lock a backend for quite\n> some time.\n> \n> I think we should just keep appending oids, and do a list_sort(list,\n> list_oid_cmp) and list_deduplicate_oid(list) before processing the list, in\n> sync_sessionvars_all() and AtPreEOXact_SessionVariable_on_xact_actions().\n> \n> Maybe while at it we could reuse sync_sessionvars_all in\n> AtPreEOXact_SessionVariable_on_xact_actions (with a way to ask\n> for the lxid check or not), rather than duplicating the whole logic twice?\n> \n> +/*\n> + * Fast drop of the complete content of all session variables hash table.\n> + * This is code for DISCARD VARIABLES command. 
This command\n> + * cannot be run inside transaction, so we don't need to handle\n> + * end of transaction actions.\n> + */\n> +void\n> +ResetSessionVariables(void)\n> +{\n> +\t/* Destroy hash table and reset related memory context */\n> +\tif (sessionvars)\n> +\t{\n> +\t\thash_destroy(sessionvars);\n> +\t\tsessionvars = NULL;\n> +\n> +\t\thash_destroy(sessionvars_types);\n> +\t\tsessionvars_types = NULL;\n> +\t}\n> +\n> +\t/* Release memory allocated by session variables */\n> +\tif (SVariableMemoryContext != NULL)\n> +\t\tMemoryContextReset(SVariableMemoryContext);\n> +\n> +\t/*\n> +\t * There are not any session variables left, so simply trim xact\n> +\t * action list, and other lists.\n> +\t */\n> +\tlist_free_deep(xact_on_commit_actions);\n> +\txact_on_commit_actions = NIL;\n> +\n> +\t/* We should clean xact_reset_varids */\n> +\tlist_free(xact_reset_varids);\n> +\txact_reset_varids = NIL;\n> +\n> +\t/* we should clean xact_recheck_varids */\n> +\tlist_free(xact_recheck_varids);\n> +\txact_recheck_varids = NIL;\n> +}\n> \n> The initial comment is wrong. This function is used for both DISCARD VARIABLES\n> and DISCARD ALL, but only DISCARD ALL isn't allowed in a transaction (I fixed\n> the comment in the attached patchset).\n> We should allow DISCARD VARIABLES in a transaction, therefore it needs some\n> more thinking on which list can be freed, and in which context it should hold\n> its data. AFAICS the only problematic case is ON COMMIT DROP, but an extra\n> check wouldn't hurt. 
For instance:\n> \n> rjuju=# BEGIN;\n> BEGIN\n> \n> rjuju=# CREATE TEMP VARIABLE v AS int ON COMMIT DROP;\n> CREATE VARIABLE\n> \n> rjuju=# DISCARD VARIABLES ;\n> DISCARD VARIABLES\n> \n> rjuju=# COMMIT;\n> COMMIT\n> \n> rjuju=# \\dV\n> List of variables\n> Schema | Name | Type | Collation | Nullable | Mutable | Default | Owner | Transactional end action\n> -----------+------+---------+-----------+----------+---------+---------+-------+--------------------------\n> pg_temp_3 | v | integer | | t | t | | rjuju | ON COMMIT DROP\n> (1 row)\n> \n> Note that I still think that keeping a single List for both SVariableXActAction\n> helps for readability, even if it means cherry-picking which items should be\n> removed on DISCARD VARIABLES (which shouldn't be a very frequent operation\n> anyway).\n> \n> Also, xact_recheck_varids is allocated in SVariableMemoryContext, so DISCARD\n> VARIABLE will crash if there's any pending recheck action.\n> \n> There's only one regression test for DISCARD VARIABLE, which clearly wasn't\n> enough. There should be one for the ON COMMIT DROP (which can be added in\n> normal regression test), one one with all action list populated (that need to\n> be in isolation tester). Both are added in the patchset in a suggestion patch,\n> and for now the first test fails and the second crashes.\n> \n> \n> - set_session_variable() is documented to either succeed or not change the\n> currently set value. While it's globally true, I see 2 things that could be\n> problematic:\n> \n> - free_session_variable_value() could technically fail. However, I don't see\n> how it could be happening unless there's a memory corruption, so this would\n> result in either an abort, or a backend in a very bad state. Anyway, since\n> pfree() can clearly ereport(ERROR) we should probably do something about\n> it. 
That being said, I don't really see the point of trying to preserve a\n> value that looks like random pointer, which will probably cause a segfault\n> the next time it's used. Maybe add a PG_TRY block around the call and mark\n> it as invalid (and set freeval to false) if that happens?\n> \n> - the final elog(DEBUG1) can also fail. It also seems highly unlikely, so\n> maybe accept that this exception is ok? For now I'm adding such a comment\n> in a suggestion patch.\n> \n> - prepare_variable_for_reading() and SetSessionVariable():\n> \n> +\t/* Ensure so all entries in sessionvars hash table are valid */\n> +\tsync_sessionvars_all();\n> +\n> +\t/* Protect used session variable against drop until transaction end */\n> +\tLockDatabaseObject(VariableRelationId, varid, 0, AccessShareLock);\n> \n> It's possible that a session variable is dropped after calling\n> sync_sessionvars_all(), and we would receive the sinval when acquiring the lock\n> on VariableRelationId but not process it until the next sync_sessionvars_all\n> call. I think we should acquire the lock first and then call\n> sync_sessionvars_all. I did that in the suggestion patch.\n\nRebased patchset against recent conflicts, thanks to Pavel for the reminder.\n\nWhile sending a new patch, I realized that I forgot mentioning this in\nexecMain.c:\n\n@@ -200,6 +201,61 @@ standard_ExecutorStart(QueryDesc *queryDesc, int eflags)\n \tAssert(queryDesc->sourceText != NULL);\n \testate->es_sourceText = queryDesc->sourceText;\n\n+\t/*\n+\t * The executor doesn't work with session variables directly. Values of\n+\t * related session variables are copied to dedicated array, and this array\n+\t * is passed to executor.\n+\t */\n+\tif (queryDesc->num_session_variables > 0)\n+\t{\n+\t\t/*\n+\t\t * When paralel access to query parameters (including related session\n+\t\t * variables) is required, then related session variables are restored\n+\t\t * (deserilized) in queryDesc already. 
So just push pointer of this\n+\t\t * array to executor's estate.\n+\t\t */\n+\t\testate->es_session_variables = queryDesc->session_variables;\n+\t\testate->es_num_session_variables = queryDesc->num_session_variables;\n+\t}\n+\telse if (queryDesc->plannedstmt->sessionVariables)\n+\t{\n+\t\tListCell *lc;\n+\t\tint\t\t\tnSessionVariables;\n+\t\tint\t\t\ti = 0;\n+\n+\t\t/*\n+\t\t * In this case, the query uses session variables, but we have to\n+\t\t * prepare the array with passed values (of used session variables)\n+\t\t * first.\n+\t\t */\n+\t\tnSessionVariables = list_length(queryDesc->plannedstmt->sessionVariables);\n+\n+\t\t/* Create the array used for passing values of used session variables */\n+\t\testate->es_session_variables = (SessionVariableValue *)\n+\t\t\tpalloc(nSessionVariables * sizeof(SessionVariableValue));\n+\n+\t\t/* Fill the array */\n+\t\t[...]\n+\n+\t\testate->es_num_session_variables = nSessionVariables;\n+\t}\n\nI haven't looked at that part yet, but the comments are a bit obscure. IIUC\nthe first branch is for parallel workers only, if the main backend provided the\narray, and the 2nd chunk is for the main backend. If so, it could be made\nclearer, and maybe add an assert about IsParallelWorker() (or\n!IsParallelWorker()) as needed?", "msg_date": "Thu, 22 Sep 2022 14:40:59 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi,\n\nThe patch has rotten again, sending an updated version. 
Also, after\ntalking with Pavel, he can't work on this patch before a few days so\nI'm adding some extra fixup patches for the things I reported in the\nlast few emails, so that the cfbot can hopefully turn green.\n\nOn Thu, Sep 22, 2022 at 2:41 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Fri, Sep 16, 2022 at 11:59:04AM +0800, Julien Rouhaud wrote:\n> > Hi,\n> >\n> > On Sun, Sep 11, 2022 at 09:29:49PM +0200, Pavel Stehule wrote:\n> > >>\n> > >> Originally it was not possible, because there was no xact_reset_varids list, and without this list the processing\n> > >> ON_COMMIT_DROP started DROP VARIABLE command, and there was a request for ON_COMMIT_RESET action.\n> > >> Now, it is possible, because in RemoveSessionVariable is conditional execution:\n> > >>\n> > >> <--><--><-->if (!svar->eox_reset)\n> > >> <--><--><--><-->register_session_variable_xact_action(varid,\n> > >> <--><--><--><--><--><--><--><--><--><--><--><--><--> SVAR_ON_COMMIT_RESET);\n> > >> <--><-->}\n> > >>\n> > >> So when we process ON_COMMIT_DROP actions, we know that the reset will not be processed by ON_COMMIT_RESET action,\n> > >> and then these lists can be merged.\n> > >>\n> > >> so I merged these two lists to one\n> >\n> > Thanks! This really helps with code readability, and after looking at it I\n> > found some issues (see below).\n> > >\n> > > changes:\n> > >\n> > > - some minor cleaning\n> > > - refactoring of RemoveSessionVariable - move part of code to pg_variable.c\n> >\n> > Thanks. I think we could still do more to split what code belongs to\n> > pg_variable.c and session_variable.c. In my opinion, the various DDL code\n> > should only invoke functions in pg_variable.c, which themselves can call\n> > function in session_variable.c if needed, and session_variable shouldn't know\n> > about CreateSessionVarStmt (which should probably be rename\n> > CreateVariableStmt?) or VariableRelationId. 
After an off-list bikeshedding\n> > session with Pavel, we came up with SessionVariableCreatePostprocess() and\n> > SessionVariableDropPostprocess() for the functions in session_variable.c called\n> > by pg_variable.c when handling CREATE VARIABLE and DROP VARIABLE commands.\n> >\n> > I'm attaching a new patchset with this change and some more (see below). I'm\n> > not sending .txt files as this is rebased on top on the recent GUC refactoring\n> > patch. It won't change the cfbot outcome though, as I also add new regression\n> > tests that are for now failing (see below). I tried to keep the changes in\n> > extra \"FIXUP\" patches when possible, but the API changes in the first patch\n> > cause conflicts in the next one, so the big session variable patch has to\n> > contain the needed changes.\n> >\n> > In this patchset, I also changed the following:\n> >\n> > - global pass on the comments in session_variable\n> > - removed now useless sessionvars_types\n> > - added missing prototypes for static functions (for consistency), and moved\n> > all the static functions before the static function\n> > - minor other nitpicking / stylistic changes\n> >\n> > Here are the problems I found:\n> >\n> > - IdentifyVariable()\n> >\n> > /*\n> > * Lock relation. This will also accept any pending invalidation\n> > * messages. 
If we got back InvalidOid, indicating not found, then\n> > * there's nothing to lock, but we accept invalidation messages\n> > * anyway, to flush any negative catcache entries that may be\n> > * lingering.\n> > */\n> > + if (!OidIsValid(varid))\n> > + AcceptInvalidationMessages();\n> > + else if (OidIsValid(varid))\n> > + LockDatabaseObject(VariableRelationId, varid, 0, AccessShareLock);\n> > +\n> > + if (inval_count == SharedInvalidMessageCounter)\n> > + break;\n> > +\n> > + retry = true;\n> > + old_varid = varid;\n> > + }\n> >\n> > AFAICS it's correct, but just to be extra cautious I'd explicitly set varid to\n> > InvalidOid before looping, so you restart in the same condition as the first\n> > iteration (since varid is initialize when declared). Also, the comments should\n> > be modified, it's \"Lock variable\", not \"Lock relation\", same for the comment in\n> > the previous chunk (\"we've locked the relation that used to have this\n> > name...\").\n> >\n> > +Datum\n> > +pg_debug_show_used_session_variables(PG_FUNCTION_ARGS)\n> > +{\n> > +[...]\n> > + else\n> > + {\n> > + /*\n> > + * When session variable was removed from catalog, but still\n> > + * it in memory. The memory was not purged yet.\n> > + */\n> > + nulls[1] = true;\n> > + nulls[2] = true;\n> > + nulls[4] = true;\n> > + values[5] = BoolGetDatum(true);\n> > + nulls[6] = true;\n> > + nulls[7] = true;\n> > + nulls[8] = true;\n> > + }\n> >\n> > I'm wondering if we could try to improve things a bit here. Maybe display the\n> > variable oid instead of its name as we still have that information, the type\n> > (using FORMAT_TYPE_ALLOW_INVALID as there's no guarantee that the type would\n> > still exist without the dependency) and whether the variable is valid (at least\n> > per its stored value). 
We can keep NULL for the privileges, as there's no API\n> > avoid erroring if the role has been dropped.\n> >\n> > +{ oid => '8488', descr => 'debug list of used session variables',\n> > + proname => 'pg_debug_show_used_session_variables', prorows => '1000', proretset => 't',\n> > + provolatile => 's', prorettype => 'record', proargtypes => '',\n> > + proallargtypes => '{oid,text,text,oid,text,bool,bool,bool,bool}',\n> > + proargmodes => '{o,o,o,o,o,o,o,o,o}',\n> > + proargnames => '{varid,schema,name,typid,typname,removed,has_value,can_read,can_write}',\n> >\n> > Since we change READ / WRITE acl for SELECT / UPDATE, we should rename the\n> > column can_select and can_update.\n> >\n> > +static void\n> > +pg_variable_cache_callback(Datum arg, int cacheid, uint32 hashvalue)\n> > +{\n> > + [...]\n> > + while ((svar = (SVariable) hash_seq_search(&status)) != NULL)\n> > + {\n> > + if (hashvalue == 0 || svar->hashvalue == hashvalue)\n> > + {\n> > + [...]\n> > + xact_recheck_varids = list_append_unique_oid(xact_recheck_varids,\n> > + svar->varid);\n> >\n> > This has a pretty terrible complexity. It can degenerate badly, and there\n> > isn't any CHECK_FOR_INTERRUPTS so you could easily lock a backend for quite\n> > some time.\n> >\n> > I think we should just keep appending oids, and do a list_sort(list,\n> > list_oid_cmp) and list_deduplicate_oid(list) before processing the list, in\n> > sync_sessionvars_all() and AtPreEOXact_SessionVariable_on_xact_actions().\n> >\n> > Maybe while at it we could reuse sync_sessionvars_all in\n> > AtPreEOXact_SessionVariable_on_xact_actions (with a way to ask\n> > for the lxid check or not), rather than duplicating the whole logic twice?\n> >\n> > +/*\n> > + * Fast drop of the complete content of all session variables hash table.\n> > + * This is code for DISCARD VARIABLES command. 
This command\n> > + * cannot be run inside transaction, so we don't need to handle\n> > + * end of transaction actions.\n> > + */\n> > +void\n> > +ResetSessionVariables(void)\n> > +{\n> > + /* Destroy hash table and reset related memory context */\n> > + if (sessionvars)\n> > + {\n> > + hash_destroy(sessionvars);\n> > + sessionvars = NULL;\n> > +\n> > + hash_destroy(sessionvars_types);\n> > + sessionvars_types = NULL;\n> > + }\n> > +\n> > + /* Release memory allocated by session variables */\n> > + if (SVariableMemoryContext != NULL)\n> > + MemoryContextReset(SVariableMemoryContext);\n> > +\n> > + /*\n> > + * There are not any session variables left, so simply trim xact\n> > + * action list, and other lists.\n> > + */\n> > + list_free_deep(xact_on_commit_actions);\n> > + xact_on_commit_actions = NIL;\n> > +\n> > + /* We should clean xact_reset_varids */\n> > + list_free(xact_reset_varids);\n> > + xact_reset_varids = NIL;\n> > +\n> > + /* we should clean xact_recheck_varids */\n> > + list_free(xact_recheck_varids);\n> > + xact_recheck_varids = NIL;\n> > +}\n> >\n> > The initial comment is wrong. This function is used for both DISCARD VARIABLES\n> > and DISCARD ALL, but only DISCARD ALL isn't allowed in a transaction (I fixed\n> > the comment in the attached patchset).\n> > We should allow DISCARD VARIABLES in a transaction, therefore it needs some\n> > more thinking on which list can be freed, and in which context it should hold\n> > its data. AFAICS the only problematic case is ON COMMIT DROP, but an extra\n> > check wouldn't hurt. 
For instance:\n> >\n> > rjuju=# BEGIN;\n> > BEGIN\n> >\n> > rjuju=# CREATE TEMP VARIABLE v AS int ON COMMIT DROP;\n> > CREATE VARIABLE\n> >\n> > rjuju=# DISCARD VARIABLES ;\n> > DISCARD VARIABLES\n> >\n> > rjuju=# COMMIT;\n> > COMMIT\n> >\n> > rjuju=# \\dV\n> > List of variables\n> > Schema | Name | Type | Collation | Nullable | Mutable | Default | Owner | Transactional end action\n> > -----------+------+---------+-----------+----------+---------+---------+-------+--------------------------\n> > pg_temp_3 | v | integer | | t | t | | rjuju | ON COMMIT DROP\n> > (1 row)\n> >\n> > Note that I still think that keeping a single List for both SVariableXActAction\n> > helps for readability, even if it means cherry-picking which items should be\n> > removed on DISCARD VARIABLES (which shouldn't be a very frequent operation\n> > anyway).\n> >\n> > Also, xact_recheck_varids is allocated in SVariableMemoryContext, so DISCARD\n> > VARIABLE will crash if there's any pending recheck action.\n> >\n> > There's only one regression test for DISCARD VARIABLE, which clearly wasn't\n> > enough. There should be one for the ON COMMIT DROP (which can be added in\n> > normal regression test), one one with all action list populated (that need to\n> > be in isolation tester). Both are added in the patchset in a suggestion patch,\n> > and for now the first test fails and the second crashes.\n> >\n> >\n> > - set_session_variable() is documented to either succeed or not change the\n> > currently set value. While it's globally true, I see 2 things that could be\n> > problematic:\n> >\n> > - free_session_variable_value() could technically fail. However, I don't see\n> > how it could be happening unless there's a memory corruption, so this would\n> > result in either an abort, or a backend in a very bad state. Anyway, since\n> > pfree() can clearly ereport(ERROR) we should probably do something about\n> > it. 
That being said, I don't really see the point of trying to preserve a\n> > value that looks like random pointer, which will probably cause a segfault\n> > the next time it's used. Maybe add a PG_TRY block around the call and mark\n> > it as invalid (and set freeval to false) if that happens?\n> >\n> > - the final elog(DEBUG1) can also fail. It also seems highly unlikely, so\n> > maybe accept that this exception is ok? For now I'm adding such a comment\n> > in a suggestion patch.\n> >\n> > - prepare_variable_for_reading() and SetSessionVariable():\n> >\n> > + /* Ensure so all entries in sessionvars hash table are valid */\n> > + sync_sessionvars_all();\n> > +\n> > + /* Protect used session variable against drop until transaction end */\n> > + LockDatabaseObject(VariableRelationId, varid, 0, AccessShareLock);\n> >\n> > It's possible that a session variable is dropped after calling\n> > sync_sessionvars_all(), and we would receive the sinval when acquiring the lock\n> > on VariableRelationId but not process it until the next sync_sessionvars_all\n> > call. I think we should acquire the lock first and then call\n> > sync_sessionvars_all. I did that in the suggestion patch.\n>\n> Rebased patcshet against recent conflicts, thanks to Pavel for the reminder.\n>\n> While sending a new patch, I realized that I forgot mentionning this in\n> execMain.c:\n>\n> @@ -200,6 +201,61 @@ standard_ExecutorStart(QueryDesc *queryDesc, int eflags)\n> Assert(queryDesc->sourceText != NULL);\n> estate->es_sourceText = queryDesc->sourceText;\n>\n> + /*\n> + * The executor doesn't work with session variables directly. Values of\n> + * related session variables are copied to dedicated array, and this array\n> + * is passed to executor.\n> + */\n> + if (queryDesc->num_session_variables > 0)\n> + {\n> + /*\n> + * When paralel access to query parameters (including related session\n> + * variables) is required, then related session variables are restored\n> + * (deserilized) in queryDesc already. 
So just push pointer of this\n> + * array to executor's estate.\n> + */\n> + estate->es_session_variables = queryDesc->session_variables;\n> + estate->es_num_session_variables = queryDesc->num_session_variables;\n> + }\n> + else if (queryDesc->plannedstmt->sessionVariables)\n> + {\n> + ListCell *lc;\n> + int nSessionVariables;\n> + int i = 0;\n> +\n> + /*\n> + * In this case, the query uses session variables, but we have to\n> + * prepare the array with passed values (of used session variables)\n> + * first.\n> + */\n> + nSessionVariables = list_length(queryDesc->plannedstmt->sessionVariables);\n> +\n> + /* Create the array used for passing values of used session variables */\n> + estate->es_session_variables = (SessionVariableValue *)\n> + palloc(nSessionVariables * sizeof(SessionVariableValue));\n> +\n> + /* Fill the array */\n> + [...]\n> +\n> + estate->es_num_session_variables = nSessionVariables;\n> + }\n>\n> I haven't looked at that part yet, but the comments are a bit obscure. IIUC\n> the first branch is for parallel workers only, if the main backend provided the\n> array, and the 2nd chunk is for the main backend. If so, it could be made\n> clearer, and maybe add an assert about IsParallelWorker() (or\n> !IsParallelWorker()) as needed?\n\nFull list of changes:\n - rebased against multiple conflicts since last version\n - fixed the meson build\n - fixed the ON COMMIT DROP problem and the crash on RESET VARIABLES\n - fixed some copy/pasto in the expected isolation tests (visible now\nthat it works)\n - added the asserts and tried to clarify the comments for the\nsession variable handling in QueryDesc (I still haven't really read\nthat part)\n - did the mentioned modifications on\npg_debug_show_used_session_variables, and used CStringGetTextDatum\nmacro to simplify the code\n\nNote that while waiting for the CI to finish I noticed that the commit\nmessage for 0001 still mentions the READ/WRITE acl. 
The commit\nmessages will probably need a bit of rewording too once everything\nelse is fixed, but this one could be changed already.", "msg_date": "Sun, 25 Sep 2022 14:56:03 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nne 25. 9. 2022 v 8:56 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> Hi,\n>\n> The patch has rotten again, sending an updated version. Also, after\n> talking with Pavel, he can't work on this patch before a few days so\n> I'm adding some extra fixup patches for the things I reported in the\n> last few emails, so that the cfbot can hopefully turn green.\n>\n> Note that while waiting for the CI to finish I noticed that the commit\n> message for 0001 still mentions the READ/WRITE acl. The commit\n> messages will probably need a bit of rewording too once everything\n> else is fixed, but this one could be changed already.\n>\n\nI fixed the commit message of 0001 patch. Fixed shadowed variables too.\n\nThere is a partially open issue, where I and Julien are not sure about a\nsolution, and we would like to ask for the community's opinion. I'll send\nthis query in separate mail.\n\nRegards\n\nPavel", "msg_date": "Wed, 12 Oct 2022 15:26:46 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nst 12. 10. 2022 v 15:26 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> ne 25. 9. 2022 v 8:56 odesílatel Julien Rouhaud <rjuju123@gmail.com>\n> napsal:\n>\n>> Hi,\n>>\n>> The patch has rotten again, sending an updated version. 
Also, after\n>> talking with Pavel, he can't work on this patch before a few days so\n>> I'm adding some extra fixup patches for the things I reported in the\n>> last few emails, so that the cfbot can hopefully turn green.\n>>\n>> Note that while waiting for the CI to finish I noticed that the commit\n>> message for 0001 still mentions the READ/WRITE acl. The commit\n>> messages will probably need a bit of rewording too once everything\n>> else is fixed, but this one could be changed already.\n>>\n>\n> I fixed the commit message of 0001 patch. Fixed shadowed variables too.\n>\n> There is a partially open issue, where I and Julien are not sure about a\n> solution, and we would like to ask for the community's opinion. I'll send\n> this query in separate mail.\n>\n\n rebased with simplified code related to usage of pfree function\n\nRegards\n\nPavel\n\n\n> Regards\n>\n> Pavel\n>\n>\n>", "msg_date": "Thu, 13 Oct 2022 07:41:32 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi,\n\nOn Thu, Oct 13, 2022 at 07:41:32AM +0200, Pavel Stehule wrote:\n>\n> > I fixed the commit message of 0001 patch. Fixed shadowed variables too.\n\nThanks!\n\n> >\n> > There is a partially open issue, where I and Julien are not sure about a\n> > solution, and we would like to ask for the community's opinion. I'll send\n> > this query in separate mail.\n> >\n>\n> rebased with simplified code related to usage of pfree function\n\nIf anyone is curious the discussion happend at [1].\n\nI looked at the patchset, this time focusing on the LET command. 
Here are the\ncomments I have for now:\n\n- gram.y\n\n@@ -11918,6 +11920,7 @@ ExplainableStmt:\n \t\t\t| CreateMatViewStmt\n \t\t\t| RefreshMatViewStmt\n \t\t\t| ExecuteStmt\t\t\t\t\t/* by default all are $$=$1 */\n+\t\t\t| LetStmt\n \t\t;\n\n(and other similar places) the comment should be kept to the last statement\n\nAlso, having LetStmt as an ExplainableStmt means it's allowed in a CTE:\n\ncte_list:\n\t\tcommon_table_expr\t\t\t\t\t\t{ $$ = list_make1($1); }\n\t\t| cte_list ',' common_table_expr\t\t{ $$ = lappend($1, $3); }\n\t\t;\n\ncommon_table_expr: name opt_name_list AS opt_materialized '(' PreparableStmt ')' opt_search_clause opt_cycle_clause\n\nAnd doing so hits this assert in transformWithClause:\n\n\t\tif (!IsA(cte->ctequery, SelectStmt))\n\t\t{\n\t\t\t/* must be a data-modifying statement */\n\t\t\tAssert(IsA(cte->ctequery, InsertStmt) ||\n\t\t\t\t   IsA(cte->ctequery, UpdateStmt) ||\n\t\t\t\t   IsA(cte->ctequery, DeleteStmt));\n\n\t\t\tpstate->p_hasModifyingCTE = true;\n\t\t}\n\nand I'm assuming it would also fail on this in transformLetStmt:\n\n+\t/* There can't be any outer WITH to worry about */\n+\tAssert(pstate->p_ctenamespace == NIL);\n\nI guess it makes sense to be able to explain a LetStmt (or using it in a\nprepared statement), so it should be properly handled in transformSelectStmt.\nAlso, I don't see any test for a prepared LET statement, this should also be\ncovered.\n\n- transformLetStmt:\n\n+\tvarid = IdentifyVariable(names, &attrname, &not_unique);\n\nIt would be nice to have a comment saying that the lock is acquired here\n\n+\t/* The grammar should have produced a SELECT */\n+\tif (!IsA(selectQuery, Query) ||\n+\t\tselectQuery->commandType != CMD_SELECT)\n+\t\telog(ERROR, \"unexpected non-SELECT command in LET command\");\n\nI'm wondering if this should be an Assert instead, as the grammar shouldn't\nproduce anything else no matter how hard a user tries.\n\n+\t/* don't allow multicolumn result */\n+\tif (list_length(exprList) != 
1)\n+\t\tereport(ERROR,\n+\t\t\t\t(errcode(ERRCODE_SYNTAX_ERROR),\n+\t\t\t\t errmsg(\"expression is not scalar value\"),\n+\t\t\t\t parser_errposition(pstate,\n+\t\t\t\t\t\t\t\t\texprLocation((Node *) exprList))));\n\nThis isn't covered by any regression test and it probably should. It can be\nreached with something like\n\nLET myvar = (null::pg_class).*;\n\nThe error message could also use a bit of improvement.\n\nI see that a_expr allows a select statement in parens, but this leads to a\nsublink which already has all the required protection to guarantee a single\ncolumn, and a single row at most during execution. This one returns for the\nnon-scalar case:\n\nsubquery must return only one column\n\nMaybe use something similar for it, like \"expression must return only one\ncolumn\"? Similarly the error message in svariableStartupReceiver could be made\nmore consistent with the related errors:\n\n+\t\tif (++outcols > 1)\n+\t\t\telog(ERROR, \"svariable DestReceiver can take only one attribute\");\n\nWhile on svariableReceiver, I see that the current code assumes that the caller\ndid everything right. That's the case right now, but it should still be made\nmore robust in case future code (or extensions) is added. I'm thinking:\n\n- svariableState.rows. 
Currently not really used, should check that one and\n  only one row is received in svariableReceiveSlot and\n  svariableShutdownReceiver (if no row is received the variable won't be reset,\n  which should probably always happen once you set up an svariableReceiver)\n- svariableState.typid, typmod and typlen should be double checked with the\n  given varid in svariableStartupReceiver.\n- svariableState.varid should be initialized with InvalidOid to avoid undefined\n  behavior if the caller forgets to set it.\n\nI'm also wondering if SetVariableDestReceiverParams() should have an assert\nlike LockHeldByMe() for the given varid, and maybe an assert that the varid is\na session variable, to avoid running a possibly expensive execution that will\nfail when receiving the slot. I think the function would be better named\nSetVariableDestReceiverVarid() or something like that.\n\n+void\n+ExecuteLetStmt(ParseState *pstate,\n+\t\t\t   LetStmt *stmt,\n+\t\t\t   ParamListInfo params,\n+\t\t\t   QueryEnvironment *queryEnv,\n+\t\t\t   QueryCompletion *qc)\n+{\n+ [...]\n+\t/* run the plan to completion */\n+\tExecutorRun(queryDesc, ForwardScanDirection, 2L, true);\n\nWhy 2 rows? I'm assuming it's an attempt to detect queries that return more\nthan 1 row, but it should be documented. Note that as mentioned above the dest\nreceiver currently doesn't check it, so this definitely needs to be fixed.\n\n- IdentifyVariable:\n\n*attrname can be set even if no variable is identified. I guess that's ok as\nit avoids useless code, but it should probably be documented in the function\nheader.\n\nAlso, the API doesn't look ideal. 
AFAICS the only reason this function doesn't\nerror out in case of ambiguous name is that transformColumnRef may check if a\ngiven name shadows a variable when session_variables_ambiguity_warning is set.\nBut since IdentifyVariable returns InvalidOid if the given list of identifiers\nis ambiguous, it seems that the shadow detection can fail to detect a shadowed\nreference if multiple variable would shadow the name:\n\n# CREATE TYPE ab AS (a integer, b integer);\nCREATE TYPE\n# CREATE VARIABLE v_ab AS ab;\nCREATE VARIABLE\n\n# CREATE TABLE v_ab (a integer, b integer);\nCREATE TABLE\n\n# SET session_variables_ambiguity_warning = 1;\nSET\n\n# sELECT v_ab.a FROM v_ab;\nWARNING: 42702: session variable \"v_ab.a\" is shadowed\nLINE 1: select v_ab.a from v_ab;\n ^\nDETAIL: Session variables can be shadowed by columns, routine's variables and routine's arguments with the same name.\n a\n---\n(0 rows)\n\n# CREATE SCHEMA v_ab;\nCREATE SCHEMA\n\n# CREATE VARIABLE v_ab.a AS integer;\nCREATE VARIABLE\n\n# SELECT v_ab.a FROM v_ab;\n a\n---\n(0 rows)\n\n\nNote that a bit later in transformColumnRef(), not_unique is checked only if\nthe returned varid is valid, which isn't correct as InvalidOid is currently\nreturned if not_unique is set.\n\nI think that the error should be raised in IdentifyVariable rather than having\nevery caller check it. I'm not sure how to perfectly handle the\nsession_variables_ambiguity_warning though. Maybe make not_unique optional,\nand error out if not_unique is null. If not null, set it as necessary and\nreturn one of the oid. The only use would be for shadowing detection, and in\nthat case it won't be possible to check if a warning can be avoided as it would\nbe if no amgibuity is found, but that's probably ok.\n\nOr maybe instead LookupVariable should have an extra argument to only match\nvariable with a composite type if caller asks to. 
This would avoid scenarios\nlike:\n\nCREATE VARIABLE myvar AS int;\nSELECT myvar.blabla;\nERROR: 42809: type integer is not composite\n\nIs that really ok to match a variable here rather than complaining about a\nmissing FROM-clause?\n\n+\tindirection_start = list_length(names) - (attrname ? 1 : 0);\n+\tindirection = list_copy_tail(stmt->target, indirection_start);\n+ [...]\n+\t\tif (indirection != NULL)\n+\t\t{\n+\t\t\tbool\t\ttargetIsArray;\n+\t\t\tchar\t *targetName;\n+\n+\t\t\ttargetName = get_session_variable_name(varid);\n+\t\t\ttargetIsArray = OidIsValid(get_element_type(typid));\n+\n+\t\t\tpstate->p_hasSessionVariables = true;\n+\n+\t\t\tcoerced_expr = (Expr *)\n+\t\t\t\ttransformAssignmentIndirection(pstate,\n+\t\t\t\t\t\t\t\t\t\t\t (Node *) param,\n+\t\t\t\t\t\t\t\t\t\t\t targetName,\n+\t\t\t\t\t\t\t\t\t\t\t targetIsArray,\n+\t\t\t\t\t\t\t\t\t\t\t typid,\n+\t\t\t\t\t\t\t\t\t\t\t typmod,\n+\t\t\t\t\t\t\t\t\t\t\t InvalidOid,\n+\t\t\t\t\t\t\t\t\t\t\t indirection,\n+\t\t\t\t\t\t\t\t\t\t\t list_head(indirection),\n+\t\t\t\t\t\t\t\t\t\t\t (Node *) expr,\n+\t\t\t\t\t\t\t\t\t\t\t COERCION_PLPGSQL,\n+\t\t\t\t\t\t\t\t\t\t\t stmt->location);\n+\t\t}\n\nI'm not sure why you use this approach rather than just having something like\n\"ListCell *indirection_head\", set it to a non-NULL value when needed, and use\nthat (with names) instead. Note that it's also not correct to compare a List\nto NULL, use NIL instead.\n\n- expr_kind_allows_session_variables\n\nEven if that's a bit annoying, I think it's better to explicitly put all values\nthere rather than having a default clause.\n\nFor instance, EXPR_KIND_CYCLE_MARK is currently allowing session variables,\nwhich doesn't look ok. 
It's probably just an error from when the patchset was\nrebased, but this probably wouldn't happen if you get an error for an unmatched\nvalue if you add a new expr kind (which doesn't happen that often).\n\n[1] https://www.postgresql.org/message-id/CAFj8pRB2+pVBFsidS-AzhHdZid40OTUspWfXS0vgahHmaWosZQ@mail.gmail.com\n\n\n", "msg_date": "Mon, 17 Oct 2022 11:17:43 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "> On Thu, Oct 13, 2022 at 07:41:32AM +0200, Pavel Stehule wrote:\n> rebased with simplified code related to usage of pfree function\n\nThanks for the patch, great work!\n\nI've got a couple of questions, although I haven't fully finished reviewing yet\n(so more to come):\n\n* I'm curious about ALTER VARIABLE. Current implementation allows altering only\n  the name, schema or the owner -- why not e.g. immutability?\n\n* psql tab completion implementation mentions that CREATE VARIABLE could be\n  used inside CREATE SCHEMA:\n\n    /* CREATE VARIABLE --- is allowed inside CREATE SCHEMA, so use TailMatches */\n    /* Complete CREATE VARIABLE <name> with AS */\n    else if (TailMatches(\"IMMUTABLE\"))\n\n  Is that correct? It doesn't look like it works, and from what I see it requires\n  some modifications in transformCreateSchemaStmt and schema_stmt.\n\n* psql describe mentions the following:\n\n\t/*\n\t * Most functions in this file are content to print an empty table when\n\t * there are no matching objects. We intentionally deviate from that\n\t * here, but only in !quiet mode, for historical reasons.\n\t */\n\n  I guess it's taken from listTables, and the extended version says \"because\n  of the possibility that the user is confused about what the two pattern\n  arguments mean\". 
Do those historical reasons apply to variables as well?\n\n\n", "msg_date": "Sun, 30 Oct 2022 19:05:42 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nne 30. 10. 2022 v 19:05 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\nnapsal:\n\n> > On Thu, Oct 13, 2022 at 07:41:32AM +0200, Pavel Stehule wrote:\n> > rebased with simplified code related to usage of pfree function\n>\n> Thanks for the patch, great work!\n>\n> I've got a couple of questions, although I haven't fully finished\n> reviewing yet\n> (so more to come):\n>\n> * I'm curious about ALTER VARIABLE. Current implementation allows altering\n> only\n> the name, schema or the owner -- why not e.g. immutability?\n>\n\nIt is just in the \"not implemented yet\" category. The name, schema or owner\ndoesn't change behavior. It should be possible (in future versions) to change\nthe type, default expression, or immutability (I think). But the patch is long\nenough already, so I prefer to support just the basic generic ALTER related to\nschema, and to implement the other possibilities in later iterations.\n\n\n>\n> * psql tab completion implementation mentions that CREATE VARIABLE could be\n> used inside CREATE SCHEMA:\n>\n> /* CREATE VARIABLE --- is allowed inside CREATE SCHEMA, so use\n> TailMatches */\n> /* Complete CREATE VARIABLE <name> with AS */\n> else if (TailMatches(\"IMMUTABLE\"))\n>\n> Is that correct? It doesn't look like it works, and from what I see it\n> requires\n> some modifications in transformCreateSchemaStmt and schema_stmt.\n>\n\nYes, this is a bug. It should be fixed.\n\n\n\n>\n> * psql describe mentions the following:\n>\n> /*\n> * Most functions in this file are content to print an empty table\n> when\n> * there are no matching objects. 
We intentionally deviate from\n> that\n> * here, but only in !quiet mode, for historical reasons.\n> */\n>\n> I guess it's taken from listTables, and the extended version says\n> \"because\n> of the possibility that the user is confused about what the two pattern\n> arguments mean\". Do those historical reasons apply to variables as well?\n>\n\nThe behavior is the same as for tables:\n\n(2022-10-30 19:48:14) postgres=# \\dt\nDid not find any relations.\n(2022-10-30 19:48:16) postgres=# \\dV\nDid not find any session variables.\n\nThank you for the comments\n\nPavel", "msg_date": "Sun, 30 Oct 2022 19:49:41 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\npo 17. 10. 2022 v 5:17 odesílatel Julien Rouhaud <rjuju123@gmail.com>\nnapsal:\n\n> Hi,\n>\n> On Thu, Oct 13, 2022 at 07:41:32AM +0200, Pavel Stehule wrote:\n> >\n> > > I fixed the commit message of 0001 patch. Fixed shadowed variables too.\n>\n> Thanks!\n>\n> > >\n> > > There is a partially open issue, where I and Julien are not sure about\n> a\n> > > solution, and we would like to ask for the community's opinion. I'll\n> send\n> > > this query in separate mail.\n> > >\n> >\n> > rebased with simplified code related to usage of pfree function\n>\n> If anyone is curious the discussion happend at [1].\n>\n> I looked at the patchset, this time focusing on the LET command. 
Here at\n> the\n> comments I have for now:\n>\n> - gram.y\n>\n> @@ -11918,6 +11920,7 @@ ExplainableStmt:\n> | CreateMatViewStmt\n> | RefreshMatViewStmt\n> | ExecuteStmt /*\n> by default all are $$=$1 */\n> + | LetStmt\n> ;\n>\n> (and other similar places) the comment should be kept to the last statement\n>\n\nfixed\n\n\n>\n> Also, having LetStmt as an ExplainableStmt means it's allowed in a CTE:\n>\n> cte_list:\n> common_table_expr\n> { $$ = list_make1($1); }\n> | cte_list ',' common_table_expr { $$ =\n> lappend($1, $3); }\n> ;\n>\n> common_table_expr: name opt_name_list AS opt_materialized '('\n> PreparableStmt ')' opt_search_clause opt_cycle_clause\n>\n> And doing so hits this assert in transformWithClause:\n>\n> if (!IsA(cte->ctequery, SelectStmt))\n> {\n> /* must be a data-modifying statement */\n> Assert(IsA(cte->ctequery, InsertStmt) ||\n> IsA(cte->ctequery, UpdateStmt) ||\n> IsA(cte->ctequery, DeleteStmt));\n>\n> pstate->p_hasModifyingCTE = true;\n> }\n>\n> and I'm assuming it would also fail on this in transformLetStmt:\n>\n> + /* There can't be any outer WITH to worry about */\n> + Assert(pstate->p_ctenamespace == NIL);\n>\n> I guess it makes sense to be able to explain a LetStmt (or using it in a\n> prepared statement), so it should be properly handled in\n> transformSelectStmt.\n> Also, I don't see any test for a prepared LET statement, this should also\n> be\n> covered.\n>\n\nThe LET statement doesn't return data, so it should be disallowed similar\nlike MERGE statement\n\nI enhanced the regression test about PREPARE of the LET statement. 
I found\nand fixed the missing plan dependency on the target variable of the LET command\n\n\n\n> - transformLetStmt:\n>\n> + varid = IdentifyVariable(names, &attrname, &not_unique);\n>\n> It would be nice to have a comment saying that the lock is acquired here\n>\n\ndone\n\n\n>\n> + /* The grammar should have produced a SELECT */\n> + if (!IsA(selectQuery, Query) ||\n> + selectQuery->commandType != CMD_SELECT)\n> + elog(ERROR, \"unexpected non-SELECT command in LET\n> command\");\n>\n> I'm wondering if this should be an Assert instead, as the grammar shouldn't\n> produce anything else no matter how hard a user tries.\n>\n\ndone\n\n\n>\n> + /* don't allow multicolumn result */\n> + if (list_length(exprList) != 
That's the case right now, but it should still be\n> made\n> more robust in case future code (or extensions) is added. I'm thinking:\n>\n> - svariableState.rows. Currently not really used, should check that one\n> and\n> only one row is received in svariableReceiveSlot and\n> svariableShutdownReceiver (if no row is received the variable won't be\n> reset\n> which should probably always happen once you setup an svariableReceiver)\n>\n\ndone\n\n\n> - svariableState.typid, typmod and typlen should be double checked with the\n> given varid in svariableStartupReceiver.\n>\n\ndone\n\n\n> - svariableState.varid should be initialized with InvalidOid to avoid\n> undefined\n> behavior is caller forgets to set it.\n>\n\nsvariableState is initialized by palloc0\n\n\n>\n> I'm also wondering if SetVariableDestReceiverParams() should have an assert\n> like LockHeldByMe() for the given varid,\n\n\ndone\n\n\n> and maybe an assert that the varid is\n> a session variable, to avoid running a possibly expensive execution that\n> will\n>\n\ndone\n\n\n> fail when receiving the slot. I think the function would be better named\n> SetVariableDestReceiverVarid() or something like that.\n>\n\ndone\n\n\n>\n> +void\n> +ExecuteLetStmt(ParseState *pstate,\n> + LetStmt *stmt,\n> + ParamListInfo params,\n> + QueryEnvironment *queryEnv,\n> + QueryCompletion *qc)\n> +{\n> + [...]\n> + /* run the plan to completion */\n> + ExecutorRun(queryDesc, ForwardScanDirection, 2L, true);\n>\n> Why 2 rows? I'm assuming it's an attempt to detect queries that returns\n> more\n> than 1 row, but it should be documented. Note that as mentioned above the\n> dest\n> receiver currently doesn't check it, so this definitely needs to be fixed.\n>\n\ndone + check + tests\n\n\n\n>\n> - IdentifyVariable:\n>\n> *attrname can be set even is no variable is identified. I guess that's ok\n> as\n> it avoids useless code, but it should probably be documented in the\n> function\n> header.\n>\n\nThis is a side effect. 
The attrname is used only when the returned oid is\nvalid. I checked code, and\nI extended comments on the function.\n\nI am sending updated patch, next points I'll process tomorrow\n\n\n\n\n\n>\n> Also, the API doesn't look ideal. AFAICS the only reason this function\n> doesn't\n> error out in case of ambiguous name is that transformColumnRef may check\n> if a\n> given name shadows a variable when session_variables_ambiguity_warning is\n> set.\n> But since IdentifyVariable returns InvalidOid if the given list of\n> identifiers\n> is ambiguous, it seems that the shadow detection can fail to detect a\n> shadowed\n> reference if multiple variable would shadow the name:\n>\n> # CREATE TYPE ab AS (a integer, b integer);\n> CREATE TYPE\n> # CREATE VARIABLE v_ab AS ab;\n> CREATE VARIABLE\n>\n> # CREATE TABLE v_ab (a integer, b integer);\n> CREATE TABLE\n>\n> # SET session_variables_ambiguity_warning = 1;\n> SET\n>\n> # sELECT v_ab.a FROM v_ab;\n> WARNING: 42702: session variable \"v_ab.a\" is shadowed\n> LINE 1: select v_ab.a from v_ab;\n> ^\n> DETAIL: Session variables can be shadowed by columns, routine's variables\n> and routine's arguments with the same name.\n> a\n> ---\n> (0 rows)\n>\n> # CREATE SCHEMA v_ab;\n> CREATE SCHEMA\n>\n> # CREATE VARIABLE v_ab.a AS integer;\n> CREATE VARIABLE\n>\n> # SELECT v_ab.a FROM v_ab;\n> a\n> ---\n> (0 rows)\n>\n>\n> Note that a bit later in transformColumnRef(), not_unique is checked only\n> if\n> the returned varid is valid, which isn't correct as InvalidOid is currently\n> returned if not_unique is set.\n>\n> I think that the error should be raised in IdentifyVariable rather than\n> having\n> every caller check it. I'm not sure how to perfectly handle the\n> session_variables_ambiguity_warning though. Maybe make not_unique\n> optional,\n> and error out if not_unique is null. If not null, set it as necessary and\n> return one of the oid. 
The only use would be for shadowing detection, and\n> in\n> that case it won't be possible to check if a warning can be avoided as it\n> would\n> be if no amgibuity is found, but that's probably ok.\n>\n> Or maybe instead LookupVariable should have an extra argument to only match\n> variable with a composite type if caller asks to. This would avoid\n> scenarios\n> like:\n>\n> CREATE VARIABLE myvar AS int;\n> SELECT myvar.blabla;\n> ERROR: 42809: type integer is not composite\n>\n> Is that really ok to match a variable here rather than complaining about a\n> missing FROM-clause?\n>\n> + indirection_start = list_length(names) - (attrname ? 1 : 0);\n> + indirection = list_copy_tail(stmt->target, indirection_start);\n> + [...]\n> + if (indirection != NULL)\n> + {\n> + bool targetIsArray;\n> + char *targetName;\n> +\n> + targetName = get_session_variable_name(varid);\n> + targetIsArray =\n> OidIsValid(get_element_type(typid));\n> +\n> + pstate->p_hasSessionVariables = true;\n> +\n> + coerced_expr = (Expr *)\n> + transformAssignmentIndirection(pstate,\n> +\n> (Node *) param,\n> +\n> targetName,\n> +\n> targetIsArray,\n> +\n> typid,\n> +\n> typmod,\n> +\n> InvalidOid,\n> +\n> indirection,\n> +\n> list_head(indirection),\n> +\n> (Node *) expr,\n> +\n> COERCION_PLPGSQL,\n> +\n> stmt->location);\n> + }\n>\n> I'm not sure why you use this approach rather than just having something\n> like\n> \"ListCell *indirection_head\", set it to a non-NULL value when needed, and\n> use\n> that (with names) instead. Note that it's also not correct to compare a\n> List\n> to NULL, use NIL instead.\n>\n> - expr_kind_allows_session_variables\n>\n> Even if that's a bit annoying, I think it's better to explicitly put all\n> values\n> there rather than having a default clause.\n>\n> For instance, EXPR_KIND_CYCLE_MARK is currently allowing session variables,\n> which doesn't look ok. 
It's probably just an error from when the patchset\n> was\n> rebased, but this probably wouldn't happen if you get an error for an\n> unmatched\n> value if you add a new expr kind (which doesn't happen that often).\n>\n\n+ fixed issue reported by Dmitry Dolgov\n\n>\n> [1]\n> https://www.postgresql.org/message-id/CAFj8pRB2+pVBFsidS-AzhHdZid40OTUspWfXS0vgahHmaWosZQ@mail.gmail.com\n>", "msg_date": "Mon, 31 Oct 2022 21:27:21 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\n\n> Also, the API doesn't look ideal. AFAICS the only reason this function\n> doesn't\n> error out in case of ambiguous name is that transformColumnRef may check\n> if a\n> given name shadows a variable when session_variables_ambiguity_warning is\n> set.\n> But since IdentifyVariable returns InvalidOid if the given list of\n> identifiers\n> is ambiguous, it seems that the shadow detection can fail to detect a\n> shadowed\n> reference if multiple variable would shadow the name:\n>\n> # CREATE TYPE ab AS (a integer, b integer);\n> CREATE TYPE\n> # CREATE VARIABLE v_ab AS ab;\n> CREATE VARIABLE\n>\n> # CREATE TABLE v_ab (a integer, b integer);\n> CREATE TABLE\n>\n> # SET session_variables_ambiguity_warning = 1;\n> SET\n>\n> # sELECT v_ab.a FROM v_ab;\n> WARNING: 42702: session variable \"v_ab.a\" is shadowed\n> LINE 1: select v_ab.a from v_ab;\n> ^\n> DETAIL: Session variables can be shadowed by columns, routine's variables\n> and routine's arguments with the same name.\n> a\n> ---\n> (0 rows)\n>\n> # CREATE SCHEMA v_ab;\n> CREATE SCHEMA\n>\n> # CREATE VARIABLE v_ab.a AS integer;\n> CREATE VARIABLE\n>\n> # SELECT v_ab.a FROM v_ab;\n> a\n> ---\n> (0 rows)\n>\n>\n> Note that a bit later in transformColumnRef(), not_unique is checked only\n> if\n> the returned varid is valid, which isn't correct as InvalidOid is currently\n> returned if not_unique is set.\n>\n> I think 
that the error should be raised in IdentifyVariable rather than\n> having\n> every caller check it. I'm not sure how to perfectly handle the\n> session_variables_ambiguity_warning though. Maybe make not_unique\n> optional,\n> and error out if not_unique is null. If not null, set it as necessary and\n> return one of the oid. The only use would be for shadowing detection, and\n> in\n> that case it won't be possible to check if a warning can be avoided as it\n> would\n> be if no amgibuity is found, but that's probably ok.\n>\n\ndone\n\nI partially rewrote the IdentifyVariable routine. Now it should be robust.\n\n\n\n>\n> Or maybe instead LookupVariable should have an extra argument to only match\n> variable with a composite type if caller asks to. This would avoid\n> scenarios\n> like:\n>\n> CREATE VARIABLE myvar AS int;\n> SELECT myvar.blabla;\n> ERROR: 42809: type integer is not composite\n>\n> Is that really ok to match a variable here rather than complaining about a\n> missing FROM-clause?\n>\n\nI feel \"missing FROM-clause\" is a little bit better, although the message\n\"type integer is not composite\" is correct too. But there is agreement so\nimplementation of session variables should minimize impacts on PostgreSQL\nbehaviour, and it is more comfortant with some filtering used in other\nplaces.\n\n\n\n>\n> + indirection_start = list_length(names) - (attrname ? 
1 : 0);\n> + indirection = list_copy_tail(stmt->target, indirection_start);\n> + [...]\n> + if (indirection != NULL)\n> + {\n> + bool targetIsArray;\n> + char *targetName;\n> +\n> + targetName = get_session_variable_name(varid);\n> + targetIsArray =\n> OidIsValid(get_element_type(typid));\n> +\n> + pstate->p_hasSessionVariables = true;\n> +\n> + coerced_expr = (Expr *)\n> + transformAssignmentIndirection(pstate,\n> +\n> (Node *) param,\n> +\n> targetName,\n> +\n> targetIsArray,\n> +\n> typid,\n> +\n> typmod,\n> +\n> InvalidOid,\n> +\n> indirection,\n> +\n> list_head(indirection),\n> +\n> (Node *) expr,\n> +\n> COERCION_PLPGSQL,\n> +\n> stmt->location);\n> + }\n>\n> I'm not sure why you use this approach rather than just having something\n> like\n> \"ListCell *indirection_head\", set it to a non-NULL value when needed, and\n> use\n> that (with names) instead. Note that it's also not correct to compare a\n> List\n> to NULL, use NIL instead.\n>\n\nchanged, fixed\n\n\n>\n> - expr_kind_allows_session_variables\n>\n> Even if that's a bit annoying, I think it's better to explicitly put all\n> values\n> there rather than having a default clause.\n>\n> For instance, EXPR_KIND_CYCLE_MARK is currently allowing session variables,\n> which doesn't look ok. 
It's probably just an error from when the patchset\n> was\n> rebased, but this probably wouldn't happen if you get an error for an\n> unmatched\n> value if you add a new expr kind (which doesn't happen that often).\n>\n\ndone\n\nupdated patch assigned\n\nRegards\n\nPavel\n\n\n>\n> [1]\n> https://www.postgresql.org/message-id/CAFj8pRB2+pVBFsidS-AzhHdZid40OTUspWfXS0vgahHmaWosZQ@mail.gmail.com\n>", "msg_date": "Thu, 3 Nov 2022 16:48:43 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nfix clang warning\n\nRegards\n\nPavel", "msg_date": "Fri, 4 Nov 2022 05:58:06 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "> On Fri, Nov 04, 2022 at 05:58:06AM +0100, Pavel Stehule wrote:\n> Hi\n>\n> fix clang warning\n\nI've stumbled upon something that looks weird to me (inspired by the\nexample from tests):\n\n =# create variable v2 as int;\n =# let v2 = 3;\n =# create view vv2 as select coalesce(v2, 0) + 1000 as result\n\n =# select * from vv2;\n result\n --------\n 1003\n\n =# set force_parallel_mode to on;\n =# select * from vv2;\n result\n --------\n 1000\n\nIn the second select the actual work is done from a worker backend.\nSince values of session variables are stored in the backend local\nmemory, it's not being shared with the worker and the value is not found\nin the hash map. 
Does this suppose to be like that?\n\n\n", "msg_date": "Fri, 4 Nov 2022 15:07:48 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi,\n\nOn Fri, Nov 04, 2022 at 03:07:48PM +0100, Dmitry Dolgov wrote:\n> > On Fri, Nov 04, 2022 at 05:58:06AM +0100, Pavel Stehule wrote:\n> > Hi\n> >\n> > fix clang warning\n>\n> I've stumbled upon something that looks weird to me (inspired by the\n> example from tests):\n>\n> =# create variable v2 as int;\n> =# let v2 = 3;\n> =# create view vv2 as select coalesce(v2, 0) + 1000 as result\n>\n> =# select * from vv2;\n> result\n> --------\n> 1003\n>\n> =# set force_parallel_mode to on;\n> =# select * from vv2;\n> result\n> --------\n> 1000\n>\n> In the second select the actual work is done from a worker backend.\n> Since values of session variables are stored in the backend local\n> memory, it's not being shared with the worker and the value is not found\n> in the hash map. Does this suppose to be like that?\n\nThere's code to serialize and restore all used variables for parallel workers\n(see code about PARAM_VARIABLE and queryDesc->num_session_variables /\nqueryDesc->plannedstmt->sessionVariables). I haven't reviewed that part yet,\nbut it's supposed to be working. Blind guess would be that it's missing\nsomething in expression walker.\n\n\n", "msg_date": "Fri, 4 Nov 2022 22:17:13 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\n\npá 4. 11. 
2022 v 15:08 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\nnapsal:\n\n> > On Fri, Nov 04, 2022 at 05:58:06AM +0100, Pavel Stehule wrote:\n> > Hi\n> >\n> > fix clang warning\n>\n> I've stumbled upon something that looks weird to me (inspired by the\n> example from tests):\n>\n> =# create variable v2 as int;\n> =# let v2 = 3;\n> =# create view vv2 as select coalesce(v2, 0) + 1000 as result\n>\n> =# select * from vv2;\n> result\n> --------\n> 1003\n>\n> =# set force_parallel_mode to on;\n> =# select * from vv2;\n> result\n> --------\n> 1000\n>\n> In the second select the actual work is done from a worker backend.\n> Since values of session variables are stored in the backend local\n> memory, it's not being shared with the worker and the value is not\n> found\n> in the hash map. Does this suppose to be like that?\n>\n\nIt looks like a bug, but parallel queries should be supported.\n\nThe value of the variable is passed as parameter to the worker backend. But\nprobably somewhere the original reference was not replaced by parameter
", "msg_date": "Fri, 4 Nov 2022 15:17:18 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "> On Fri, Nov 04, 2022 at 03:17:18PM +0100, Pavel Stehule wrote:\n> > I've stumbled upon something that looks weird to me (inspired by the\n> > example from tests):\n> >\n> > =# create variable v2 as int;\n> > =# let v2 = 3;\n> > =# create view vv2 as select coalesce(v2, 0) + 1000 as result\n> >\n> > =# select * from vv2;\n> > result\n> > --------\n> > 1003\n> >\n> > =# set force_parallel_mode to on;\n> > =# select * from vv2;\n> > result\n> > --------\n> > 1000\n> >\n> > In the second select the actual work is done from a worker backend.\n> > Since values of session variables are stored in the backend local\n> > memory, it's not being shared with the worker and the value is not found\n> > in the hash map. Does this suppose to be like that?\n> >\n>\n> It looks like a bug, but parallel queries should be supported.\n>\n> The value of the variable is passed as parameter to the worker backend. But\n> probably somewhere the original reference was not replaced by parameter\n>\n> On Fri, Nov 04, 2022 at 10:17:13PM +0800, Julien Rouhaud wrote:\n> Hi,\n>\n> There's code to serialize and restore all used variables for parallel workers\n> (see code about PARAM_VARIABLE and queryDesc->num_session_variables /\n> queryDesc->plannedstmt->sessionVariables). I haven't reviewed that part yet,\n> but it's supposed to be working. Blind guess would be that it's missing\n> something in expression walker.\n\nI see, thanks. 
I'll take a deeper look, my initial assumption was due to\nthe fact that in the worker case create_sessionvars_hashtables is\ngetting called for every query.\n\n\n", "msg_date": "Fri, 4 Nov 2022 15:28:42 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi,\n\nI did a quick initial review of this patch series - attached is a\nversion with \"review\" commits for some of the parts. The current patch\nseems in pretty good shape, most of what I noticed are minor issues. I\nplan to do a more thorough review later.\n\nA quick overview of the issues:\n\n0001\n----\n\n- AtPreEOXact_SessionVariable_on_xact_actions name seems unnecessarily\ncomplicated and redundant, and mismatching nearby functions. Why not\ncall it AtEOXact_SessionVariable, similar to AtEOXact_LargeObject?\n\n- some whitespace / ordering cleanup\n\n- I'm not sure why find_composite_type_dependencies needs the extra\n\"else if\" branch (instead of just doing \"if\" as before)\n\n- NamesFromList and IdentifyVariable seem introduced unnecessarily\nearly, as they are only used in 0002 and 0003 parts (in the original\npatch series). 
Not sure if the plan is to squash everything into a\nsingle patch, or commit individual patches.\n\n- AFAIK patches don't need to modify typedefs.list.\n\n\n0002\n----\n\n- some whitespace / ordering cleanup\n\n- moving setting hasSessionVariables right after similar fields\n\n- SessionVariableCreatePostprocess prototype is redundant (2x)\n\n- I'd probably rename pg_debug_show_used_session_variables to\npg_session_variables (assuming we want to keep this view)\n\n\n0003\n----\n\n- I'd rename svariableState to SVariableState, to keep the naming\nconsistent with other similar/related typedefs.\n\n- some whitespace / ordering cleanup\n\n\n0007\n----\n\n- minor wording change\n\n\nAside from that, I tried running this under valgrind, and that produces\nthis report:\n\n==250595== Conditional jump or move depends on uninitialised value(s)\n==250595== at 0x731A48: sync_sessionvars_all (session_variable.c:513)\n==250595== by 0x7321A6: prepare_variable_for_reading\n(session_variable.c:727)\n==250595== by 0x7320BA: CopySessionVariable (session_variable.c:898)\n==250595== by 0x7BC3BF: standard_ExecutorStart (execMain.c:252)\n==250595== by 0x7BC042: ExecutorStart (execMain.c:146)\n==250595== by 0xA89283: PortalStart (pquery.c:520)\n==250595== by 0xA84E8D: exec_simple_query (postgres.c:1199)\n==250595== by 0xA8425B: PostgresMain (postgres.c:4551)\n==250595== by 0x998B03: BackendRun (postmaster.c:4482)\n==250595== by 0x9980EC: BackendStartup (postmaster.c:4210)\n==250595== by 0x996F0D: ServerLoop (postmaster.c:1804)\n==250595== by 0x9948CA: PostmasterMain (postmaster.c:1476)\n==250595== by 0x8526B6: main (main.c:197)\n==250595== Uninitialised value was created by a heap allocation\n==250595== at 0xCD86F0: MemoryContextAllocExtended (mcxt.c:1138)\n==250595== by 0xC9FA1F: DynaHashAlloc (dynahash.c:292)\n==250595== by 0xC9FEC1: element_alloc (dynahash.c:1715)\n==250595== by 0xCA102A: get_hash_entry (dynahash.c:1324)\n==250595== by 0xCA0879: hash_search_with_hash_value 
(dynahash.c:1097)\n==250595== by 0xCA0432: hash_search (dynahash.c:958)\n==250595== by 0x731614: SetSessionVariable (session_variable.c:846)\n==250595== by 0x82FEED: svariableReceiveSlot (svariableReceiver.c:138)\n==250595== by 0x7BD277: ExecutePlan (execMain.c:1726)\n==250595== by 0x7BD0C5: standard_ExecutorRun (execMain.c:422)\n==250595== by 0x7BCE63: ExecutorRun (execMain.c:366)\n==250595== by 0x7332F0: ExecuteLetStmt (session_variable.c:1310)\n==250595== by 0xA8CC15: standard_ProcessUtility (utility.c:1076)\n==250595== by 0xA8BC72: ProcessUtility (utility.c:533)\n==250595== by 0xA8B2B9: PortalRunUtility (pquery.c:1161)\n==250595== by 0xA8A454: PortalRunMulti (pquery.c:1318)\n==250595== by 0xA89A16: PortalRun (pquery.c:794)\n==250595== by 0xA84F9E: exec_simple_query (postgres.c:1238)\n==250595== by 0xA8425B: PostgresMain (postgres.c:4551)\n==250595== by 0x998B03: BackendRun (postmaster.c:4482)\n==250595==\n\nWhich I think means this:\n\n if (filter_lxid && svar->drop_lxid == MyProc->lxid)\n continue;\n\naccesses drop_lxid, which was not initialized in init_session_variable.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sat, 5 Nov 2022 17:04:31 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi,\n\nOn Sat, Nov 05, 2022 at 05:04:31PM +0100, Tomas Vondra wrote:\n>\n> I did a quick initial review of this patch series - attached is a\n> version with \"review\" commits for some of the parts. The current patch\n> seems in pretty good shape, most of what I noticed are minor issues. 
I\n> plan to do a more thorough review later.\n\nThanks!\n\nI agree with all of your comments, just a few answers below\n\n> - NamesFromList and IdentifyVariable seem introduced unnecessarily\n> early, as they are only used in 0002 and 0003 parts (in the original\n> patch series). Not sure if the plan is to squash everything into a\n> single patch, or commit individual patches.\n\nThe split was mostly done to make the patch easier to review, as it adds quite\na bit of infrastructure.\n\nThere have been some previous comments to have a more logical separation and\nfix similar issues, but there are still probably other oddities like that\nlaying around. I personally didn't focus much on it as I don't know if the\nfuture committer will choose to squash everything or not.\n\n> - AFAIK patches don't need to modify typedefs.list.\n\nI think this was discussed a year or so ago, and my understanding is that the\ngeneral rule is that it's now welcome, if not recommended, to maintain\ntypedefs.list in each patchset.\n\n> Which I think means this:\n>\n> if (filter_lxid && svar->drop_lxid == MyProc->lxid)\n> continue;\n>\n> accesses drop_lxid, which was not initialized in init_session_variable.\n\nAgreed.\n\n\n", "msg_date": "Sun, 6 Nov 2022 01:25:09 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "pá 4. 11. 
2022 v 15:28 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\nnapsal:\n\n> > On Fri, Nov 04, 2022 at 03:17:18PM +0100, Pavel Stehule wrote:\n> > > I've stumbled upon something that looks weird to me (inspired by the\n> > > example from tests):\n> > >\n> > > =# create variable v2 as int;\n> > > =# let v2 = 3;\n> > > =# create view vv2 as select coalesce(v2, 0) + 1000 as result\n> > >\n> > > =# select * from vv2;\n> > > result\n> > > --------\n> > > 1003\n> > >\n> > > =# set force_parallel_mode to on;\n> > > =# select * from vv2;\n> > > result\n> > > --------\n> > > 1000\n> > >\n> > > In the second select the actual work is done from a worker backend.\n> > > Since values of session variables are stored in the backend local\n> > > memory, it's not being shared with the worker and the value is not\n> found\n> > > in the hash map. Does this suppose to be like that?\n> > >\n> >\n> > It looks like a bug, but parallel queries should be supported.\n> >\n> > The value of the variable is passed as parameter to the worker backend.\n> But\n> > probably somewhere the original reference was not replaced by parameter\n> >\n> > On Fri, Nov 04, 2022 at 10:17:13PM +0800, Julien Rouhaud wrote:\n> > Hi,\n> >\n> > There's code to serialize and restore all used variables for parallel\n> workers\n> > (see code about PARAM_VARIABLE and queryDesc->num_session_variables /\n> > queryDesc->plannedstmt->sessionVariables). I haven't reviewed that part\n> yet,\n> > but it's supposed to be working. Blind guess would be that it's missing\n> > something in expression walker.\n>\n> I see, thanks. 
I'll take a deeper look, my initial assumption was due to\n> the fact that in the worker case create_sessionvars_hashtables is\n> getting called for every query.\n>\n\nIt should be fixed in today's patch\n\nThe problem was in missing pushing the hasSessionVariables flag to the\nupper subquery in pull_up_simple_subquery.\n\n--- a/src/backend/optimizer/prep/prepjointree.c\n+++ b/src/backend/optimizer/prep/prepjointree.c\n@@ -1275,6 +1275,9 @@ pull_up_simple_subquery(PlannerInfo *root, Node\n*jtnode, RangeTblEntry *rte,\n /* If subquery had any RLS conditions, now main query does too */\n parse->hasRowSecurity |= subquery->hasRowSecurity;\n\n+ /* If subquery had session variables, now main query does too */\n+ parse->hasSessionVariables |= subquery->hasSessionVariables;\n+\n\nThank you for the check and bug report. Your example was added to regress\ntests\n\nRegards\n\nPavel
", "msg_date": "Sun, 13 Nov 2022 15:58:34 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "so 5. 11. 2022 v 17:04 odesílatel Tomas Vondra <\ntomas.vondra@enterprisedb.com> napsal:\n\n> Hi,\n>\n> I did a quick initial review of this patch series - attached is a\n> version with \"review\" commits for some of the parts. The current patch\n> seems in pretty good shape, most of what I noticed are minor issues. 
I\n> plan to do a more thorough review later.\n>\n> A quick overview of the issues:\n>\n> 0001\n> ----\n>\n> - AtPreEOXact_SessionVariable_on_xact_actions name seems unnecessarily\n> complicated and redundant, and mismatching nearby functions. Why not\n> call it AtEOXact_SessionVariable, similar to AtEOXact_LargeObject?\n>\n\nrenamed\n\n\n>\n> - some whitespace / ordering cleanup\n>\n> - I'm not sure why find_composite_type_dependencies needs the extra\n> \"else if\" branch (instead of just doing \"if\" as before)\n>\n\nyes, it was not necessary\n\n\n> - NamesFromList and IdentifyVariable seem introduced unnecessarily\n> early, as they are only used in 0002 and 0003 parts (in the original\n> patch series). Not sure if the plan is to squash everything into a\n> single patch, or commit individual patches.\n>\n\nmoved to 0002\n\n\n>\n> - AFAIK patches don't need to modify typedefs.list.\n>\n>\n> 0002\n> ----\n>\n> - some whitespace / ordering cleanup\n>\n> - moving setting hasSessionVariables right after similar fields\n>\n\nfixed\n\n\n>\n> - SessionVariableCreatePostprocess prototype is redundant (2x)\n>\n\nremoved\n\n\n>\n> - I'd probably rename pg_debug_show_used_session_variables to\n> pg_session_variables (assuming we want to keep this view)\n>\n\nrenamed\n\n\n>\n>\n> 0003\n> ----\n>\n> - I'd rename svariableState to SVariableState, to keep the naming\n> consistent with other similar/related typedefs.\n>\n\nrenamed\n\n\n> - some whitespace / ordering cleanup\n>\n>\n> 0007\n> ----\n>\n> - minor wording change\n>\n\nfixed\n\n>\n>\n> Aside from that, I tried running this under valgrind, and that produces\n> this report:\n>\n> ==250595== Conditional jump or move depends on uninitialised value(s)\n> ==250595== at 0x731A48: sync_sessionvars_all (session_variable.c:513)\n> ==250595== by 0x7321A6: prepare_variable_for_reading\n> (session_variable.c:727)\n> ==250595== by 0x7320BA: CopySessionVariable (session_variable.c:898)\n> ==250595== by 0x7BC3BF: 
standard_ExecutorStart (execMain.c:252)\n> ==250595== by 0x7BC042: ExecutorStart (execMain.c:146)\n> ==250595== by 0xA89283: PortalStart (pquery.c:520)\n> ==250595== by 0xA84E8D: exec_simple_query (postgres.c:1199)\n> ==250595== by 0xA8425B: PostgresMain (postgres.c:4551)\n> ==250595== by 0x998B03: BackendRun (postmaster.c:4482)\n> ==250595== by 0x9980EC: BackendStartup (postmaster.c:4210)\n> ==250595== by 0x996F0D: ServerLoop (postmaster.c:1804)\n> ==250595== by 0x9948CA: PostmasterMain (postmaster.c:1476)\n> ==250595== by 0x8526B6: main (main.c:197)\n> ==250595== Uninitialised value was created by a heap allocation\n> ==250595== at 0xCD86F0: MemoryContextAllocExtended (mcxt.c:1138)\n> ==250595== by 0xC9FA1F: DynaHashAlloc (dynahash.c:292)\n> ==250595== by 0xC9FEC1: element_alloc (dynahash.c:1715)\n> ==250595== by 0xCA102A: get_hash_entry (dynahash.c:1324)\n> ==250595== by 0xCA0879: hash_search_with_hash_value (dynahash.c:1097)\n> ==250595== by 0xCA0432: hash_search (dynahash.c:958)\n> ==250595== by 0x731614: SetSessionVariable (session_variable.c:846)\n> ==250595== by 0x82FEED: svariableReceiveSlot (svariableReceiver.c:138)\n> ==250595== by 0x7BD277: ExecutePlan (execMain.c:1726)\n> ==250595== by 0x7BD0C5: standard_ExecutorRun (execMain.c:422)\n> ==250595== by 0x7BCE63: ExecutorRun (execMain.c:366)\n> ==250595== by 0x7332F0: ExecuteLetStmt (session_variable.c:1310)\n> ==250595== by 0xA8CC15: standard_ProcessUtility (utility.c:1076)\n> ==250595== by 0xA8BC72: ProcessUtility (utility.c:533)\n> ==250595== by 0xA8B2B9: PortalRunUtility (pquery.c:1161)\n> ==250595== by 0xA8A454: PortalRunMulti (pquery.c:1318)\n> ==250595== by 0xA89A16: PortalRun (pquery.c:794)\n> ==250595== by 0xA84F9E: exec_simple_query (postgres.c:1238)\n> ==250595== by 0xA8425B: PostgresMain (postgres.c:4551)\n> ==250595== by 0x998B03: BackendRun (postmaster.c:4482)\n> ==250595==\n>\n> Which I think means this:\n>\n> if (filter_lxid && svar->drop_lxid == MyProc->lxid)\n> continue;\n>\n> accesses 
drop_lxid, which was not initialized in init_session_variable.\n>\n\nfixed\n\nThank you very much for this review.\n\nToday's patch should solve all issues reported by Tomas.\n\nRegards\n\nPavel\n\n\n\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company", "msg_date": "Sun, 13 Nov 2022 16:01:21 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nfresh rebase\n\nRegards\n\nPavel", "msg_date": "Sun, 13 Nov 2022 18:59:30 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On 13.11.2022 20:59, Pavel Stehule wrote:\n> fresh rebase\n\nHello,\n\nSorry, I haven't been following this thread, but I'd like to report a \nmemory management bug. I couldn't apply the latest patches, so I tested \nwith v20221104-1-* patches applied atop of commit b0284bfb1db.\n\n\npostgres=# create variable s text default 'abc';\n\ncreate function f() returns text as $$\nbegin\n return g(s);\nend;\n$$ language plpgsql;\n\ncreate function g(t text) returns text as $$\nbegin\n let s = 'BOOM!';\n return t;\nend;\n$$ language plpgsql;\n\nselect f();\nCREATE VARIABLE\nCREATE FUNCTION\nCREATE FUNCTION\nserver closed the connection unexpectedly\n\tThis probably means the server terminated abnormally\n\tbefore or while processing the request.\n\nLOG: server process (PID 55307) was terminated by signal 11: \nSegmentation fault\nDETAIL: Failed process was running: select f();\n\n\nI believe it's a use-after-free error, triggered by assigning a new \nvalue to s in g(), thus making t a dangling pointer.\n\nAfter reconnecting I get a scary error:\n\npostgres=# select f();\nERROR: compressed pglz data is corrupt\n\n\nBest regards,\n\n-- \nSergey 
Shinderuk\t\thttps://postgrespro.com/\n\n\n\n", "msg_date": "Mon, 14 Nov 2022 10:00:38 +0300", "msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "po 14. 11. 2022 v 8:00 odesílatel Sergey Shinderuk <\ns.shinderuk@postgrespro.ru> napsal:\n\n> On 13.11.2022 20:59, Pavel Stehule wrote:\n> > fresh rebase\n>\n> Hello,\n>\n> Sorry, I haven't been following this thread, but I'd like to report a\n> memory management bug. I couldn't apply the latest patches, so I tested\n> with v20221104-1-* patches applied atop of commit b0284bfb1db.\n>\n>\n> postgres=# create variable s text default 'abc';\n>\n> create function f() returns text as $$\n> begin\n> return g(s);\n> end;\n> $$ language plpgsql;\n>\n> create function g(t text) returns text as $$\n> begin\n> let s = 'BOOM!';\n> return t;\n> end;\n> $$ language plpgsql;\n>\n> select f();\n> CREATE VARIABLE\n> CREATE FUNCTION\n> CREATE FUNCTION\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n>\n> LOG: server process (PID 55307) was terminated by signal 11:\n> Segmentation fault\n> DETAIL: Failed process was running: select f();\n>\n\nI am able to reproduce it, and I have a quick fix, but I need to\ninvestigate if this fix will be correct\n\nIt's a good example so I have to always return a copy of value.\n\nRegards\n\nPavel\n\n\n\n\n\n>\n> Best regards,\n>\n> --\n> Sergey Shinderuk https://postgrespro.com/\n>\n>
", "msg_date": "Tue, 15 Nov 2022 06:00:44 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\npo 14. 11. 2022 v 8:00 odesílatel Sergey Shinderuk <\ns.shinderuk@postgrespro.ru> napsal:\n\n> On 13.11.2022 20:59, Pavel Stehule wrote:\n> > fresh rebase\n>\n> Hello,\n>\n> Sorry, I haven't been following this thread, but I'd like to report a\n> memory management bug. 
I couldn't apply the latest patches, so I tested\n> with v20221104-1-* patches applied atop of commit b0284bfb1db.\n>\n>\n> postgres=# create variable s text default 'abc';\n>\n> create function f() returns text as $$\n> begin\n> return g(s);\n> end;\n> $$ language plpgsql;\n>\n> create function g(t text) returns text as $$\n> begin\n> let s = 'BOOM!';\n> return t;\n> end;\n> $$ language plpgsql;\n>\n> select f();\n> CREATE VARIABLE\n> CREATE FUNCTION\n> CREATE FUNCTION\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n>\n> LOG: server process (PID 55307) was terminated by signal 11:\n> Segmentation fault\n> DETAIL: Failed process was running: select f();\n>\n\nshould be fixed now\n\nThank you for check\n\nRegards\n\nPavel\n\n\n>\n>\n> I believe it's a use-after-free error, triggered by assigning a new\n> value to s in g(), thus making t a dangling pointer.\n>\n> After reconnecting I get a scary error:\n>\n> postgres=# select f();\n> ERROR: compressed pglz data is corrupt\n>\n>\n> Best regards,\n>\n> --\n> Sergey Shinderuk https://postgrespro.com/\n>\n>", "msg_date": "Tue, 15 Nov 2022 21:22:12 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nfresh rebase\n\nfix small bug in tab-complete - READ|WRITE rights are not used, and support\nis removed now\n\nRegards\n\nPavel", "msg_date": "Fri, 18 Nov 2022 08:58:31 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nfresh rebase\n\nRegards\n\nPavel", "msg_date": "Mon, 21 Nov 2022 21:33:56 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { 
"msg_contents": "Hi\n\nrebase and small refactoring in parse_relation - reduce redundant code\n\nRegards\n\nPavel", "msg_date": "Thu, 24 Nov 2022 13:16:23 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nfresh rebase\n\nRegards\n\nPavel", "msg_date": "Fri, 2 Dec 2022 21:08:38 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nfresh rebase\n\nRegards\n\nPavel", "msg_date": "Tue, 6 Dec 2022 12:16:54 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nfresh rebase\n\nregards\n\nPavel", "msg_date": "Wed, 14 Dec 2022 05:54:48 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Op 14-12-2022 om 05:54 schreef Pavel Stehule:\n> Hi\n> \n> fresh rebase\n\ntypo alert:\n\nv20221214-0003-LET-command.patch contains\n\nerrmsg(\"target session varible is of type %s\"\n\n('varible' should be 'variable')\n\nErik\n\n\n", "msg_date": "Wed, 14 Dec 2022 06:20:22 +0100", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15 (typo)" }, { "msg_contents": "Hi\n\nfix regress tests after\nhttps://github.com/postgres/postgres/commit/2af33369e7940770cb81c0a9b7d3ec874ee8cb22\n\nRegards\n\nPavel", "msg_date": "Thu, 15 Dec 2022 06:06:51 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\n\nst 14. 12. 
2022 v 6:20 odesílatel Erik Rijkers <er@xs4all.nl> napsal:\n\n> Op 14-12-2022 om 05:54 schreef Pavel Stehule:\n> > Hi\n> >\n> > fresh rebase\n>\n> typo alert:\n>\n> v20221214-0003-LET-command.patch contains\n>\n> errmsg(\"target session varible is of type %s\"\n>\n> ('varible' should be 'variable')\n>\n\nshould be fixed now\n\nThank you for the check\n\n\n\n>\n> Erik\n>", "msg_date": "Thu, 15 Dec 2022 07:12:59 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15 (typo)" }, { "msg_contents": "Hi,\n\nI'm continuing to review the patch slowly, and have one more issue plus one\nphilosophical question.\n\nThe issue has something to do with variables invalidation. Enabling\ndebug_discard_caches = 1 (about which I've learned from this thread) and\nrunning this subset of the test suite:\n\n\tCREATE SCHEMA svartest;\n\tSET search_path = svartest;\n\tCREATE VARIABLE var3 AS int;\n\n\tCREATE OR REPLACE FUNCTION inc(int)\n\tRETURNS int AS $$\n\tBEGIN\n\t LET svartest.var3 = COALESCE(svartest.var3 + $1, $1);\n\t RETURN var3;\n\tEND;\n\t$$ LANGUAGE plpgsql;\n\n\tSELECT inc(1);\n\tSELECT inc(1);\n\tSELECT inc(1);\n\ncrashes in my setup like this:\n\n\t#2 0x0000000000b432d4 in ExceptionalCondition (conditionName=0xce9b99 \"n >= 0 && n < list->length\", fileName=0xce9c18 \"list.c\", lineNumber=770) at assert.c:66\n\t#3 0x00000000007d3acd in list_delete_nth_cell (list=0x18ab248, n=-3388) at list.c:770\n\t#4 0x00000000007d3b88 in list_delete_cell (list=0x18ab248, cell=0x18dc028) at list.c:842\n\t#5 0x00000000006bcb52 in sync_sessionvars_all (filter_lxid=true) at session_variable.c:524\n\t#6 0x00000000006bd4cb in SetSessionVariable (varid=16386, value=2, isNull=false) at session_variable.c:844\n\t#7 0x00000000006bd617 in SetSessionVariableWithSecurityCheck (varid=16386, value=2, isNull=false) at session_variable.c:885\n\t#8 0x00007f763b890698 in exec_stmt_let (estate=0x7ffcc6fd5190, 
stmt=0x18aa920) at pl_exec.c:5030\n\t#9 0x00007f763b88a746 in exec_stmts (estate=0x7ffcc6fd5190, stmts=0x18aaaa0) at pl_exec.c:2116\n\t#10 0x00007f763b88a247 in exec_stmt_block (estate=0x7ffcc6fd5190, block=0x18aabf8) at pl_exec.c:1935\n\t#11 0x00007f763b889a49 in exec_toplevel_block (estate=0x7ffcc6fd5190, block=0x18aabf8) at pl_exec.c:1626\n\t#12 0x00007f763b8879df in plpgsql_exec_function (func=0x18781b0, fcinfo=0x18be110, simple_eval_estate=0x0, simple_eval_resowner=0x0, procedure_resowner=0x0, atomic=true) at pl_exec.c:615\n\t#13 0x00007f763b8a2320 in plpgsql_call_handler (fcinfo=0x18be110) at pl_handler.c:277\n\t#14 0x0000000000721716 in ExecInterpExpr (state=0x18bde28, econtext=0x18bd3d0, isnull=0x7ffcc6fd56d7) at execExprInterp.c:730\n\t#15 0x0000000000723642 in ExecInterpExprStillValid (state=0x18bde28, econtext=0x18bd3d0, isNull=0x7ffcc6fd56d7) at execExprInterp.c:1855\n\t#16 0x000000000077a78b in ExecEvalExprSwitchContext (state=0x18bde28, econtext=0x18bd3d0, isNull=0x7ffcc6fd56d7) at ../../../src/include/executor/executor.h:344\n\t#17 0x000000000077a7f4 in ExecProject (projInfo=0x18bde20) at ../../../src/include/executor/executor.h:378\n\t#18 0x000000000077a9dc in ExecResult (pstate=0x18bd2c0) at nodeResult.c:136\n\t#19 0x0000000000738ea0 in ExecProcNodeFirst (node=0x18bd2c0) at execProcnode.c:464\n\t#20 0x000000000072c6e3 in ExecProcNode (node=0x18bd2c0) at ../../../src/include/executor/executor.h:262\n\t#21 0x000000000072f426 in ExecutePlan (estate=0x18bd098, planstate=0x18bd2c0, use_parallel_mode=false, operation=CMD_SELECT, sendTuples=true, numberTuples=0, direction=ForwardScanDirection, dest=0x18b3eb8, execute_once=true) at execMain.c:1691\n\t#22 0x000000000072cf76 in standard_ExecutorRun (queryDesc=0x189c688, direction=ForwardScanDirection, count=0, execute_once=true) at execMain.c:423\n\t#23 0x000000000072cdb3 in ExecutorRun (queryDesc=0x189c688, direction=ForwardScanDirection, count=0, execute_once=true) at execMain.c:367\n\t#24 
0x000000000099afdc in PortalRunSelect (portal=0x1866648, forward=true, count=0, dest=0x18b3eb8) at pquery.c:927\n\t#25 0x000000000099ac99 in PortalRun (portal=0x1866648, count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x18b3eb8, altdest=0x18b3eb8, qc=0x7ffcc6fd5a70) at pquery.c:771\n\t#26 0x000000000099487d in exec_simple_query (query_string=0x17edcc8 \"SELECT inc(1);\") at postgres.c:1238\n\nIt seems that sync_sessionvars_all tries to remove a cell that either doesn't\nbelong to the xact_recheck_varids or is weird in some other way:\n\n\t+>>> p l - xact_recheck_varids->elements\n\t$81 = -3388\n\nThe second thing I wanted to ask about is a more strategic question. Does\nanyone have a clear understanding where this patch is going? The thread is quite\nlarge, and it's hard to catch up with all the details -- it would be great if\nsomeone could summarize the current state of things, are there any outstanding\ndesign issues or not addressed concerns?\n\nFrom the first look it seems some major topics the discussion is evolving are about:\n\n* Validity of the use case. Seems to be quite convincingly addressed in [1] and\n[2].\n\n* Complicated logic around invalidation, concurrent create/drop etc. (I guess\nthe issue above is falling into the same category).\n\n* Concerns that session variables could repeat some problems of temporary tables.\n\nIs there anything else?\n\n[1]: https://www.postgresql.org/message-id/CAFj8pRBT-bRQJBqkzon7tHcoFZ1byng06peZfZa0G72z46YFxg%40mail.gmail.com\n[2]: https://www.postgresql.org/message-id/flat/CAFj8pRBHSAHdainq5tRhN2Nns62h9-SZi0pvNq9DTe0VM6M1%3Dg%40mail.gmail.com#4faccb978d60e9b0b5d920e1a8a05bbf\n\n\n", "msg_date": "Thu, 22 Dec 2022 17:15:54 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15 (typo)" }, { "msg_contents": "čt 22. 12. 
2022 v 17:15 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\nnapsal:\n\n> Hi,\n>\n> I'm continuing review the patch slowly, and have one more issue plus one\n> philosophical question.\n>\n> The issue have something to do with variables invalidation. Enabling\n> debug_discard_caches = 1 (about which I've learned from this thread) and\n> running this subset of the test suite:\n>\n> CREATE SCHEMA svartest;\n> SET search_path = svartest;\n> CREATE VARIABLE var3 AS int;\n>\n> CREATE OR REPLACE FUNCTION inc(int)\n> RETURNS int AS $$\n> BEGIN\n> LET svartest.var3 = COALESCE(svartest.var3 + $1, $1);\n> RETURN var3;\n> END;\n> $$ LANGUAGE plpgsql;\n>\n> SELECT inc(1);\n> SELECT inc(1);\n> SELECT inc(1);\n>\n> crashes in my setup like this:\n>\n> #2 0x0000000000b432d4 in ExceptionalCondition\n> (conditionName=0xce9b99 \"n >= 0 && n < list->length\", fileName=0xce9c18\n> \"list.c\", lineNumber=770) at assert.c:66\n> #3 0x00000000007d3acd in list_delete_nth_cell (list=0x18ab248,\n> n=-3388) at list.c:770\n> #4 0x00000000007d3b88 in list_delete_cell (list=0x18ab248,\n> cell=0x18dc028) at list.c:842\n> #5 0x00000000006bcb52 in sync_sessionvars_all (filter_lxid=true)\n> at session_variable.c:524\n> #6 0x00000000006bd4cb in SetSessionVariable (varid=16386,\n> value=2, isNull=false) at session_variable.c:844\n> #7 0x00000000006bd617 in SetSessionVariableWithSecurityCheck\n> (varid=16386, value=2, isNull=false) at session_variable.c:885\n> #8 0x00007f763b890698 in exec_stmt_let (estate=0x7ffcc6fd5190,\n> stmt=0x18aa920) at pl_exec.c:5030\n> #9 0x00007f763b88a746 in exec_stmts (estate=0x7ffcc6fd5190,\n> stmts=0x18aaaa0) at pl_exec.c:2116\n> #10 0x00007f763b88a247 in exec_stmt_block (estate=0x7ffcc6fd5190,\n> block=0x18aabf8) at pl_exec.c:1935\n> #11 0x00007f763b889a49 in exec_toplevel_block\n> (estate=0x7ffcc6fd5190, block=0x18aabf8) at pl_exec.c:1626\n> #12 0x00007f763b8879df in plpgsql_exec_function (func=0x18781b0,\n> fcinfo=0x18be110, simple_eval_estate=0x0, 
simple_eval_resowner=0x0,\n> procedure_resowner=0x0, atomic=true) at pl_exec.c:615\n> #13 0x00007f763b8a2320 in plpgsql_call_handler (fcinfo=0x18be110)\n> at pl_handler.c:277\n> #14 0x0000000000721716 in ExecInterpExpr (state=0x18bde28,\n> econtext=0x18bd3d0, isnull=0x7ffcc6fd56d7) at execExprInterp.c:730\n> #15 0x0000000000723642 in ExecInterpExprStillValid\n> (state=0x18bde28, econtext=0x18bd3d0, isNull=0x7ffcc6fd56d7) at\n> execExprInterp.c:1855\n> #16 0x000000000077a78b in ExecEvalExprSwitchContext\n> (state=0x18bde28, econtext=0x18bd3d0, isNull=0x7ffcc6fd56d7) at\n> ../../../src/include/executor/executor.h:344\n> #17 0x000000000077a7f4 in ExecProject (projInfo=0x18bde20) at\n> ../../../src/include/executor/executor.h:378\n> #18 0x000000000077a9dc in ExecResult (pstate=0x18bd2c0) at\n> nodeResult.c:136\n> #19 0x0000000000738ea0 in ExecProcNodeFirst (node=0x18bd2c0) at\n> execProcnode.c:464\n> #20 0x000000000072c6e3 in ExecProcNode (node=0x18bd2c0) at\n> ../../../src/include/executor/executor.h:262\n> #21 0x000000000072f426 in ExecutePlan (estate=0x18bd098,\n> planstate=0x18bd2c0, use_parallel_mode=false, operation=CMD_SELECT,\n> sendTuples=true, numberTuples=0, direction=ForwardScanDirection,\n> dest=0x18b3eb8, execute_once=true) at execMain.c:1691\n> #22 0x000000000072cf76 in standard_ExecutorRun\n> (queryDesc=0x189c688, direction=ForwardScanDirection, count=0,\n> execute_once=true) at execMain.c:423\n> #23 0x000000000072cdb3 in ExecutorRun (queryDesc=0x189c688,\n> direction=ForwardScanDirection, count=0, execute_once=true) at\n> execMain.c:367\n> #24 0x000000000099afdc in PortalRunSelect (portal=0x1866648,\n> forward=true, count=0, dest=0x18b3eb8) at pquery.c:927\n> #25 0x000000000099ac99 in PortalRun (portal=0x1866648,\n> count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x18b3eb8,\n> altdest=0x18b3eb8, qc=0x7ffcc6fd5a70) at pquery.c:771\n> #26 0x000000000099487d in exec_simple_query\n> (query_string=0x17edcc8 \"SELECT inc(1);\") at 
postgres.c:1238\n>\n> It seems that sync_sessionvars_all tries to remove a cell that either\n> doesn't\n> belong to the xact_recheck_varids or weird in some other way:\n>\n> +>>> p l - xact_recheck_varids->elements\n> $81 = -3388\n>\n\nI am able to repeat this issue. I'll look at it.\n\n>\n> The second thing I wanted to ask about is a more strategical question. Does\n> anyone have clear understanding where this patch is going? The thread is\n> quite\n> large, and it's hard to catch up with all the details -- it would be great\n> if\n> someone could summarize the current state of things, are there any\n> outstanding\n> design issues or not addressed concerns?\n>\n\nI hope I fixed the issues found by Julien and Tomas. Now there is not\nimplemented transaction support related to values, and I plan to implement\nthis feature in the next stage.\nIt is waiting for review.\n\n\n>\n> From the first look it seems some major topics the discussion is evolving\n> are about:\n>\n> * Validity of the use case. Seems to be quite convincingly addressed in\n> [1] and\n> [2].\n>\n> * Complicated logic around invalidation, concurrent create/drop etc. (I\n> guess\n> the issue above is falling into the same category).\n>\n> * Concerns that session variables could repeat some problems of temporary\n> tables.\n>\n\nWhy do you think so? The variable has no mvcc support - it is just stored\nvalue with local visibility without mvcc support. There can be little bit\nsimilar issues like with global temporary tables.\n\n\n\n>\n> Is there anything else?\n>\n> [1]:\n> https://www.postgresql.org/message-id/CAFj8pRBT-bRQJBqkzon7tHcoFZ1byng06peZfZa0G72z46YFxg%40mail.gmail.com\n> [2]:\n> https://www.postgresql.org/message-id/flat/CAFj8pRBHSAHdainq5tRhN2Nns62h9-SZi0pvNq9DTe0VM6M1%3Dg%40mail.gmail.com#4faccb978d60e9b0b5d920e1a8a05bbf\n>
", "msg_date": "Thu, 22 Dec 2022 20:45:57 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15 (typo)" }, { "msg_contents": "> On Thu, Dec 22, 2022 at 08:45:57PM +0100, Pavel Stehule wrote:\n> > From the first look it seems some major topics the discussion is evolving\n> > are about:\n> >\n> > * Validity of the use case. Seems to be quite convincingly addressed in\n> > [1] and\n> > [2].\n> >\n> > * Complicated logic around invalidation, concurrent create/drop etc. (I\n> > guess\n> > the issue above is falling into the same category).\n> >\n> > * Concerns that session variables could repeat some problems of temporary\n> > tables.\n> >\n>\n> Why do you think so? The variable has no mvcc support - it is just stored\n> value with local visibility without mvcc support. There can be little bit\n> similar issues like with global temporary tables.\n\nYeah, sorry for not being precise, I mean global temporary tables. This\nis not my analysis, I've simply picked up it was mentioned a couple of\ntimes here. The points above are not meant to serve as an objection\nagainst the patch, but rather to figure out if there are any gaps left\nto address and come up with some sort of plan with \"committed\" as a\nfinal destination.\n\n\n", "msg_date": "Thu, 22 Dec 2022 22:23:52 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15 (typo)" }, { "msg_contents": "Hi\n\nčt 22. 12. 
2022 v 22:23 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\nnapsal:\n\n> > On Thu, Dec 22, 2022 at 08:45:57PM +0100, Pavel Stehule wrote:\n> > > From the first look it seems some major topics the discussion is\n> evolving\n> > > are about:\n> > >\n> > > * Validity of the use case. Seems to be quite convincingly addressed in\n> > > [1] and\n> > > [2].\n> > >\n> > > * Complicated logic around invalidation, concurrent create/drop etc. (I\n> > > guess\n> > > the issue above is falling into the same category).\n> > >\n> > > * Concerns that session variables could repeat some problems of\n> temporary\n> > > tables.\n> > >\n>\n\nI am sending an updated patch, fixing the mentioned issue. Big thanks for\ntesting, and checking.\n\n\n> >\n> > Why do you think so? The variable has no mvcc support - it is just stored\n> > value with local visibility without mvcc support. There can be little bit\n> > similar issues like with global temporary tables.\n>\n> Yeah, sorry for not being precise, I mean global temporary tables. This\n> is not my analysis, I've simply picked up it was mentioned a couple of\n> times here. The points above are not meant to serve as an objection\n> against the patch, but rather to figure out if there are any gaps left\n> to address and come up with some sort of plan with \"committed\" as a\n> final destination.\n>\n\nThere are some similarities, but there are a lot of differences too.\nHandling of metadata is partially similar, but session variable is almost\nthe value cached in session memory. It has no statistics, it is not stored\nin a file. 
Because there is different storage, I don't think there is some\nintersection on implementation level.\n\nRegards\n\nPavel", "msg_date": "Fri, 23 Dec 2022 08:38:43 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15 (typo)" }, { "msg_contents": "Hi,\n\nOn Fri, Dec 23, 2022 at 08:38:43AM +0100, Pavel Stehule wrote:\n>\n> I am sending an updated patch, fixing the mentioned issue. Big thanks for\n> testing, and checking.\n\nThere were multiple reviews in the last weeks which reported some issues, but\nunless I'm missing something I don't see any follow up from the reviewers on\nthe changes?\n\nI will still wait a bit to see if they chime in while I keep looking at the\ndiff since the last version of the code I reviewed.\n\nBut in the meantime I already saw a couple of things that don't look right:\n\n--- a/src/backend/commands/dropcmds.c\n+++ b/src/backend/commands/dropcmds.c\n@@ -481,6 +481,11 @@ does_not_exist_skipping(ObjectType objtype, Node *object)\n \t\t\tmsg = gettext_noop(\"publication \\\"%s\\\" does not exist, skipping\");\n \t\t\tname = strVal(object);\n \t\t\tbreak;\n+\t\tcase OBJECT_VARIABLE:\n+\t\t\tmsg = gettext_noop(\"session variable \\\"%s\\\" does not exist, skipping\");\n+\t\t\tname = NameListToString(castNode(List, object));\n+\t\t\tbreak;\n+\t\tdefault:\n\n \t\tcase OBJECT_COLUMN:\n\nthe \"default:\" seems like a thinko during a rebase?\n\n+Datum\n+GetSessionVariableWithTypeCheck(Oid varid, bool *isNull, Oid expected_typid)\n+{\n+\tSVariable\tsvar;\n+\n+\tsvar = prepare_variable_for_reading(varid);\n+\tAssert(svar != NULL && svar->is_valid);\n+\n+\treturn CopySessionVariableWithTypeCheck(varid, isNull, expected_typid);\n+\n+\tif (expected_typid != svar->typid)\n+\t\telog(ERROR, \"type of variable \\\"%s.%s\\\" is different than expected\",\n+\t\t\t get_namespace_name(get_session_variable_namespace(varid)),\n+\t\t\t 
get_session_variable_name(varid));\n+\n+\t*isNull = svar->isnull;\n+\n+\treturn svar->value;\n+}\n\nthere's a unconditional return in the middle of the function, which also looks\nlike a thinko during a rebase since CopySessionVariableWithTypeCheck mostly\ncontains the same following code.\n\nI'm also wondering if there should be additional tests based on the last\nscenario reported by Dmitry? (I don't see any new or similar test, but I may\nhave missed it).\n\n> > > Why do you think so? The variable has no mvcc support - it is just stored\n> > > value with local visibility without mvcc support. There can be little bit\n> > > similar issues like with global temporary tables.\n> >\n> > Yeah, sorry for not being precise, I mean global temporary tables. This\n> > is not my analysis, I've simply picked up it was mentioned a couple of\n> > times here. The points above are not meant to serve as an objection\n> > against the patch, but rather to figure out if there are any gaps left\n> > to address and come up with some sort of plan with \"committed\" as a\n> > final destination.\n> >\n>\n> There are some similarities, but there are a lot of differences too.\n> Handling of metadata is partially similar, but session variable is almost\n> the value cached in session memory. It has no statistics, it is not stored\n> in a file. Because there is different storage, I don't think there is some\n> intersection on implementation level.\n\n+1\n\n\n", "msg_date": "Fri, 6 Jan 2023 12:10:55 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15 (typo)" }, { "msg_contents": "pá 6. 1. 2023 v 5:11 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> Hi,\n>\n> On Fri, Dec 23, 2022 at 08:38:43AM +0100, Pavel Stehule wrote:\n> >\n> > I am sending an updated patch, fixing the mentioned issue. 
Big thanks for\n> > testing, and checking.\n>\n> There were multiple reviews in the last weeks which reported some issues,\n> but\n> unless I'm missing something I don't see any follow up from the reviewers\n> on\n> the changes?\n>\n> I will still wait a bit to see if they chime in while I keep looking at the\n> diff since the last version of the code I reviewed.\n>\n> But in the meantime I already saw a couple of things that don't look right:\n>\n> --- a/src/backend/commands/dropcmds.c\n> +++ b/src/backend/commands/dropcmds.c\n> @@ -481,6 +481,11 @@ does_not_exist_skipping(ObjectType objtype, Node\n> *object)\n> msg = gettext_noop(\"publication \\\"%s\\\" does not\n> exist, skipping\");\n> name = strVal(object);\n> break;\n> + case OBJECT_VARIABLE:\n> + msg = gettext_noop(\"session variable \\\"%s\\\" does\n> not exist, skipping\");\n> + name = NameListToString(castNode(List, object));\n> + break;\n> + default:\n>\n> case OBJECT_COLUMN:\n>\n> the \"default:\" seems like a thinko during a rebase?\n>\n\nremoved\n\n\n\n\n>\n> +Datum\n> +GetSessionVariableWithTypeCheck(Oid varid, bool *isNull, Oid\n> expected_typid)\n> +{\n> + SVariable svar;\n> +\n> + svar = prepare_variable_for_reading(varid);\n> + Assert(svar != NULL && svar->is_valid);\n> +\n> + return CopySessionVariableWithTypeCheck(varid, isNull,\n> expected_typid);\n> +\n> + if (expected_typid != svar->typid)\n> + elog(ERROR, \"type of variable \\\"%s.%s\\\" is different than\n> expected\",\n> +\n> get_namespace_name(get_session_variable_namespace(varid)),\n> + get_session_variable_name(varid));\n> +\n> + *isNull = svar->isnull;\n> +\n> + return svar->value;\n> +}\n>\n> there's a unconditional return in the middle of the function, which also\n> looks\n> like a thinko during a rebase since CopySessionVariableWithTypeCheck mostly\n> contains the same following code.\n>\n\nThis looks like my mistake - originally I fixed an issue related to the\nusage session variable from plpgsql, and I forced a returned copy of 
the\nsession variable's value. Now, the function\nGetSessionVariableWithTypeCheck is not used anywhere. It can be used only\nfrom extensions, where is ensured so a) the value is not changed, b) and in\na lifetime of returned value is not called any query or any expression that\ncan change the value of this variable. I fixed this code and I enhanced\ncomments. I am not sure if this function should not be removed. It is not\nused by backend, but it can be handy for extensions - it reduces possible\nuseless copy.\n\n\n> I'm also wondering if there should be additional tests based on the last\n> scenario reported by Dmitry? (I don't see any new or similar test, but I\n> may\n> have missed it).\n>\n\nThe scenario reported by Dmitry is in tests already. I am not sure if I\nhave to repeat it with active debug_discard_cache. I expect this mode will\nbe activated in some testing environments.\n\nWhen I checked regress tests, then debug_discard_caches is set only to zero\n(in one case).\n\nI have no idea how to simply emulate this issue without\ndebug_discard_caches on 1. It is necessary to raise the sinval message\nexactly when the variable is checked against system catalogue.\n\nupdated patches attached\n\nRegards\n\nPavel\n\n\n\n>\n> > > > Why do you think so? The variable has no mvcc support - it is just\n> stored\n> > > > value with local visibility without mvcc support. There can be\n> little bit\n> > > > similar issues like with global temporary tables.\n> > >\n> > > Yeah, sorry for not being precise, I mean global temporary tables. This\n> > > is not my analysis, I've simply picked up it was mentioned a couple of\n> > > times here. 
The points above are not meant to serve as an objection\n> > > against the patch, but rather to figure out if there are any gaps left\n> > > to address and come up with some sort of plan with \"committed\" as a\n> > > final destination.\n> > >\n> >\n> > There are some similarities, but there are a lot of differences too.\n> > Handling of metadata is partially similar, but session variable is almost\n> > the value cached in session memory. It has no statistics, it is not\n> stored\n> > in a file. Because there is different storage, I don't think there is\n> some\n> > intersection on implementation level.\n>\n> +1\n>", "msg_date": "Fri, 6 Jan 2023 20:02:41 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15 (typo)" }, { "msg_contents": "Hi,\n\nOn Fri, Jan 06, 2023 at 08:02:41PM +0100, Pavel Stehule wrote:\n> pá 6. 1. 2023 v 5:11 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n>\n> >\n> > +Datum\n> > +GetSessionVariableWithTypeCheck(Oid varid, bool *isNull, Oid\n> > expected_typid)\n> > +{\n> > + SVariable svar;\n> > +\n> > + svar = prepare_variable_for_reading(varid);\n> > + Assert(svar != NULL && svar->is_valid);\n> > +\n> > + return CopySessionVariableWithTypeCheck(varid, isNull,\n> > expected_typid);\n> > +\n> > + if (expected_typid != svar->typid)\n> > + elog(ERROR, \"type of variable \\\"%s.%s\\\" is different than\n> > expected\",\n> > +\n> > get_namespace_name(get_session_variable_namespace(varid)),\n> > + get_session_variable_name(varid));\n> > +\n> > + *isNull = svar->isnull;\n> > +\n> > + return svar->value;\n> > +}\n> >\n> > there's a unconditional return in the middle of the function, which also\n> > looks\n> > like a thinko during a rebase since CopySessionVariableWithTypeCheck mostly\n> > contains the same following code.\n> >\n>\n> This looks like my mistake - originally I fixed an issue related to the\n> usage session variable from 
plpgsql, and I forced a returned copy of the\n> session variable's value. Now, the function\n> GetSessionVariableWithTypeCheck is not used anywhere.\n\nOh I didn't check if it was used in the patchset.\n\n> It can be used only\n> from extensions, where is ensured so a) the value is not changed, b) and in\n> a lifetime of returned value is not called any query or any expression that\n> can change the value of this variable. I fixed this code and I enhanced\n> comments. I am not sure if this function should not be removed. It is not\n> used by backend, but it can be handy for extensions - it reduces possible\n> useless copy.\n\nHmm, how safe is it for third-party code to access the stored data directly\nrather than a copy? If it makes extension fragile if they're not careful\nenough with cache invalidation, or even give them a way to mess up with the\ndata directly, it's probably not a good idea to provide such an API.\n\n\n> > I'm also wondering if there should be additional tests based on the last\n> > scenario reported by Dmitry? (I don't see any new or similar test, but I\n> > may\n> > have missed it).\n> >\n>\n> The scenario reported by Dmitry is in tests already.\n\nOh, so I missed it sorry about that. I did some testing using\ndebug_discard_cache in the past and didn't run into this issue, so it's\nprobably due to more recent changes or before such a test was added.\n\n> I am not sure if I\n> have to repeat it with active debug_discard_cache. 
It is necessary to raise the sinval message\n> exactly when the variable is checked against system catalogue.\n\nManually testing while setting locally debug_discard_cache should be enough.\n\n> updated patches attached\n\nThanks!\n\n\n", "msg_date": "Sat, 7 Jan 2023 13:37:19 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15 (typo)" }, { "msg_contents": "> > It can be used only\n> > from extensions, where is ensured so a) the value is not changed, b) and\n> in\n> > a lifetime of returned value is not called any query or any expression\n> that\n> > can change the value of this variable. I fixed this code and I enhanced\n> > comments. I am not sure if this function should not be removed. It is not\n> > used by backend, but it can be handy for extensions - it reduces possible\n> > useless copy.\n>\n> Hmm, how safe is it for third-party code to access the stored data directly\n> rather than a copy? If it makes extension fragile if they're not careful\n> enough with cache invalidation, or even give them a way to mess up with the\n> data directly, it's probably not a good idea to provide such an API.\n>\n\nok, I removed it\n\n\n\n\n\n\n>\n>\n> > > I'm also wondering if there should be additional tests based on the\n> last\n> > > scenario reported by Dmitry? (I don't see any new or similar test, but\n> I\n> > > may\n> > > have missed it).\n> > >\n> >\n> > The scenario reported by Dmitry is in tests already.\n>\n> Oh, so I missed it sorry about that. I did some testing using\n> debug_discard_cache in the past and didn't run into this issue, so it's\n> probably due to a more recent changes or before such a test was added.\n>\n> > I am not sure if I\n> > have to repeat it with active debug_discard_cache. 
I expect this mode\n> will\n> > be activated in some testing environments.\n>\n> Yes, some buildfarm animal are configured to run with various\n> debug_discard_caches setting so it's not needed to override it in some\n> tests\n> (it makes testing time really slow, which will be annoying for everyone\n> including old/slow buildfarm animals).\n>\n> > I have no idea how to simply emulate this issue without\n> > debug_discard_caches on 1. It is necessary to raise the sinval message\n> > exactly when the variable is checked against system catalogue.\n>\n> Manually testing while setting locally debug_discard_cache should be\n> enough.\n>\n> > updated patches attached\n>\n> Thanks!\n>\n\nI thank you\n\nPavel", "msg_date": "Sat, 7 Jan 2023 08:09:27 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15 (typo)" }, { "msg_contents": "Hi,\n\nOn Sat, Jan 07, 2023 at 08:09:27AM +0100, Pavel Stehule wrote:\n> >\n> > Hmm, how safe is it for third-party code to access the stored data directly\n> > rather than a copy? If it makes extension fragile if they're not careful\n> > enough with cache invalidation, or even give them a way to mess up with the\n> > data directly, it's probably not a good idea to provide such an API.\n> >\n>\n> ok, I removed it\n\nAnother new behavior I see is the new rowtype_only parameter for\nLookupVariable. Has this been discussed?\n\nI can see how it can be annoying to get a \"variable isn't composite\" type of\nerror when you already know that only a composite object can be used (and other\nmight work), but it looks really scary to entirely ignore some objects that\nshould be found in your search_path just because of their datatype.\n\nAnd if we ignore something like \"a.b\" if \"a\" isn't a variable of composite\ntype, why wouldn't we apply the same \"just ignore it\" rule if it's indeed a\ncomposite type but doesn't have any \"b\" field? 
Your application could also\nstart to use different object if your drop a say json variable and create a\ncomposite variable instead.\n\nIt seems to be in contradiction with how the rest of the system works and looks\nwrong to me. Note also that LookupVariable can be quite expensive since you\nmay have to do a lookup for every schema found in the search_path, so the\nsooner it stops the better.\n\n> > > updated patches attached\n\nI forgot to mention it last time but you should bump the copyright year for all\nnew files added when you'll send a new version of the patchset.\n\n\n", "msg_date": "Tue, 10 Jan 2023 10:20:25 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15 (typo)" }, { "msg_contents": "út 10. 1. 2023 v 3:20 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> Hi,\n>\n> On Sat, Jan 07, 2023 at 08:09:27AM +0100, Pavel Stehule wrote:\n> > >\n> > > Hmm, how safe is it for third-party code to access the stored data\n> directly\n> > > rather than a copy? If it makes extension fragile if they're not\n> careful\n> > > enough with cache invalidation, or even give them a way to mess up\n> with the\n> > > data directly, it's probably not a good idea to provide such an API.\n> > >\n> >\n> > ok, I removed it\n>\n> Another new behavior I see is the new rowtype_only parameter for\n> LookupVariable. 
Has this been discussed?\n>\n\nI think so it was discussed about table shadowing\n\nwithout this filter, I lost the message \"missing FROM-clause entry for ...\"\n\n -- should fail\n SELECT varx.xxx;\n-ERROR: missing FROM-clause entry for table \"varx\"\n-LINE 1: SELECT varx.xxx;\n- ^\n+ERROR: type text is not composite\n -- don't allow multi column query\n CREATE TYPE vartesttp AS (a1 int, b1 int, c1 int);\n CREATE VARIABLE v1 AS vartesttp;\n@@ -1421,9 +1419,7 @@\n DROP TYPE ab;\n CREATE VARIABLE myvar AS int;\n SELECT myvar.blabla;\n-ERROR: missing FROM-clause entry for table \"myvar\"\n-LINE 1: SELECT myvar.blabla;\n- ^\n+ERROR: type integer is not composite\n DROP VARIABLE myvar;\n -- the result of view should be same in parallel mode too\n CREATE VARIABLE v1 AS int;\n\nMy original idea was to try to reduce possible conflicts (in old versions\nof this path, a conflict was disallowed). But it is true, so these \"new\"\nerror messages are sensible too, and with eliminating rowtype_only I can\nreduce code.\n\n\n\n> I can see how it can be annoying to get a \"variable isn't composite\" type\n> of\n> error when you already know that only a composite object can be used (and\n> other\n> might work), but it looks really scary to entirely ignore some objects that\n> should be found in your search_path just because of their datatype.\n>\n> And if we ignore something like \"a.b\" if \"a\" isn't a variable of composite\n> type, why wouldn't we apply the same \"just ignore it\" rule if it's indeed a\n> composite type but doesn't have any \"b\" field? Your application could also\n> start to use different object if your drop a say json variable and create a\n> composite variable instead.\n>\n\n> It seems to be in contradiction with how the rest of the system works and\n> looks\n> wrong to me. 
Note also that LookupVariable can be quite expensive since\n> you\n> may have to do a lookup for every schema found in the search_path, so the\n> sooner it stops the better.\n>\n\nI removed this filter\n\n\n>\n> > > > updated patches attached\n>\n> I forgot to mention it last time but you should bump the copyright year\n> for all\n> new files added when you'll send a new version of the patchset.\n>\n\nfixed\n\nI modified the IdentifyVariable function a little bit. With new argument\nnoerror I am able to ensure so no error will be raised when this function\nis called just for shadowing detection.\n\nRegards\n\nPavel", "msg_date": "Tue, 10 Jan 2023 20:35:16 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15 (typo)" }, { "msg_contents": "On Tue, Jan 10, 2023 at 08:35:16PM +0100, Pavel Stehule wrote:\n> �t 10. 1. 2023 v 3:20 odes�latel Julien Rouhaud <rjuju123@gmail.com> napsal:\n> >\n> > Another new behavior I see is the new rowtype_only parameter for\n> > LookupVariable. 
Has this been discussed?\n> >\n>\n> I think so it was discussed about table shadowing\n>\n> without this filter, I lost the message \"missing FROM-clause entry for ...\"\n>\n> -- should fail\n> SELECT varx.xxx;\n> -ERROR: missing FROM-clause entry for table \"varx\"\n> -LINE 1: SELECT varx.xxx;\n> - ^\n> +ERROR: type text is not composite\n> -- don't allow multi column query\n> CREATE TYPE vartesttp AS (a1 int, b1 int, c1 int);\n> CREATE VARIABLE v1 AS vartesttp;\n> @@ -1421,9 +1419,7 @@\n> DROP TYPE ab;\n> CREATE VARIABLE myvar AS int;\n> SELECT myvar.blabla;\n> -ERROR: missing FROM-clause entry for table \"myvar\"\n> -LINE 1: SELECT myvar.blabla;\n> - ^\n> +ERROR: type integer is not composite\n> DROP VARIABLE myvar;\n> -- the result of view should be same in parallel mode too\n> CREATE VARIABLE v1 AS int;\n>\n> My original idea was to try to reduce possible conflicts (in old versions\n> of this path, a conflict was disallowed). But it is true, so these \"new\"\n> error messages are sensible too, and with eliminating rowtype_only I can\n> reduce code.\n\nOk! Another problem is that the error message as-is is highly unhelpful as\nit's not clear at all that the problem is coming from an unsuitable variable.\nMaybe change makeParamSessionVariable to use lookup_rowtype_tupdesc_noerror()\nand emit a friendlier error message? Something like\n\nvariable \"X.Y\" is of type Z, which is not composite\n\n> I modified the IdentifyVariable function a little bit. With new argument\n> noerror I am able to ensure so no error will be raised when this function\n> is called just for shadowing detection.\n\nI locally modified IdentifyVariable to emit WARNING reports when noerror is set\nto quickly see how it was used and didn't get any regression test error. This\ndefinitely needs to be covered by regression tests. 
Looking as\nsession_variables.sql, the session_variables_ambiguity_warning GUC doesn't have\na lot of tests in general.\n\n\n", "msg_date": "Wed, 11 Jan 2023 17:08:08 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15 (typo)" }, { "msg_contents": "st 11. 1. 2023 v 10:08 odesílatel Julien Rouhaud <rjuju123@gmail.com>\nnapsal:\n\n> On Tue, Jan 10, 2023 at 08:35:16PM +0100, Pavel Stehule wrote:\n> > út 10. 1. 2023 v 3:20 odesílatel Julien Rouhaud <rjuju123@gmail.com>\n> napsal:\n> > >\n> > > Another new behavior I see is the new rowtype_only parameter for\n> > > LookupVariable. Has this been discussed?\n> > >\n> >\n> > I think so it was discussed about table shadowing\n> >\n> > without this filter, I lost the message \"missing FROM-clause entry for\n> ...\"\n> >\n> > -- should fail\n> > SELECT varx.xxx;\n> > -ERROR: missing FROM-clause entry for table \"varx\"\n> > -LINE 1: SELECT varx.xxx;\n> > - ^\n> > +ERROR: type text is not composite\n> > -- don't allow multi column query\n> > CREATE TYPE vartesttp AS (a1 int, b1 int, c1 int);\n> > CREATE VARIABLE v1 AS vartesttp;\n> > @@ -1421,9 +1419,7 @@\n> > DROP TYPE ab;\n> > CREATE VARIABLE myvar AS int;\n> > SELECT myvar.blabla;\n> > -ERROR: missing FROM-clause entry for table \"myvar\"\n> > -LINE 1: SELECT myvar.blabla;\n> > - ^\n> > +ERROR: type integer is not composite\n> > DROP VARIABLE myvar;\n> > -- the result of view should be same in parallel mode too\n> > CREATE VARIABLE v1 AS int;\n> >\n> > My original idea was to try to reduce possible conflicts (in old versions\n> > of this path, a conflict was disallowed). But it is true, so these \"new\"\n> > error messages are sensible too, and with eliminating rowtype_only I can\n> > reduce code.\n>\n> Ok! 
Another problem is that the error message as-is is highly unhelpful as\n> it's not clear at all that the problem is coming from an unsuitable\n> variable.\n> Maybe change makeParamSessionVariable to use\n> lookup_rowtype_tupdesc_noerror()\n> and emit a friendlier error message? Something like\n>\n> variable \"X.Y\" is of type Z, which is not composite\n>\n\ndone\n\n\n>\n> > I modified the IdentifyVariable function a little bit. With new argument\n> > noerror I am able to ensure so no error will be raised when this function\n> > is called just for shadowing detection.\n>\n> I locally modified IdentifyVariable to emit WARNING reports when noerror\n> is set\n> to quickly see how it was used and didn't get any regression test error.\n> This\n> definitely needs to be covered by regression tests. Looking as\n> session_variables.sql, the session_variables_ambiguity_warning GUC doesn't\n> have\n> a lot of tests in general.\n>\n\nI enhanced regress tests about this scenario\n\nRegards\n\nPavel", "msg_date": "Mon, 16 Jan 2023 21:27:28 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15 (typo)" }, { "msg_contents": "I've accumulated another collection of various questions and comments. As a\nside note I'm getting a good feeling about this patch, those part I've read so\nfar looks good to me.\n\n* I've suddenly realized one could use pseudo types for variables, and\n it not always works. 
E.g.:\n\n =# create variable pseudo_array anyarray;\n =# select pseudo_array;\n pseudo_array\n --------------\n NULL\n\n =# let pseudo_array = ARRAY[1, 2, 3];\n ERROR: 42804: target session variable is of type anyarray but expression is of type integer[]\n LOCATION: svariableStartupReceiver, svariableReceiver.c:79\n\n =# create variable pseudo_unknown unknown;\n =# select pseudo_unknown;\n ERROR: XX000: failed to find conversion function from unknown to text\n LOCATION: coerce_type, parse_coerce.c:542\n\n Is it supposed to be like this, or something is missing?\n\n* I think it was already mentioned in the thread, there seems to be not a\n single usage of CHECK_FOR_INTERRUPTS in session_variable.c . But some number\n of loops over the sessionvars are implemented, are they always going to be\n small enough to not make any troubles?\n\n* sync_sessionvars_all explains why is it necessary to copy xact_recheck_varids:\n\n\t\t When we check the variables, the system cache can be invalidated,\n\t\t and xact_recheck_varids can be enhanced.\n\n I'm not quite following what the \"enhancement\" part is about? Is\n xact_recheck_varids could be somehow updated concurrently with the loop?\n\n* A small typo\n\n\tdiff --git a/src/backend/commands/session_variable.c b/src/backend/commands/session_variable.c\n\t--- a/src/backend/commands/session_variable.c\n\t+++ b/src/backend/commands/session_variable.c\n\t@@ -485,7 +485,7 @@ sync_sessionvars_all(bool filter_lxid)\n\n\t\t\t/*\n\t\t\t * When we check the variables, the system cache can be invalidated,\n\t- * and xac_recheck_varids can be enhanced. We want to iterate\n\t+ * and xact_recheck_varids can be enhanced. 
We want to iterate\n\nNOTE: The commentaries above were made based on the previous patch version, but\nit looks like those aspects were not changed.\n\n\n", "msg_date": "Fri, 20 Jan 2023 21:33:44 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15 (typo)" }, { "msg_contents": "pá 20. 1. 2023 v 21:35 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\nnapsal:\n\n> I've accumulated another collection of various questions and comments. As a\n> side note I'm getting a good feeling about this patch, those part I've\n> read so\n> far looks good to me.\n>\n> * I've suddenly realized one could use pseudo types for variables, and\n> it not always works. E.g.:\n>\n> =# create variable pseudo_array anyarray;\n> =# select pseudo_array;\n> pseudo_array\n> --------------\n> NULL\n>\n> =# let pseudo_array = ARRAY[1, 2, 3];\n> ERROR: 42804: target session variable is of type anyarray but\n> expression is of type integer[]\n> LOCATION: svariableStartupReceiver, svariableReceiver.c:79\n>\n> =# create variable pseudo_unknown unknown;\n> =# select pseudo_unknown;\n> ERROR: XX000: failed to find conversion function from unknown to text\n> LOCATION: coerce_type, parse_coerce.c:542\n>\n> Is it supposed to be like this, or something is missing?\n>\n\nit is my oversight - it should be disallowed\n\ndone\n\n\n\n>\n> * I think it was already mentioned in the thread, there seems to be not a\n> single usage of CHECK_FOR_INTERRUPTS in session_variable.c . But some\n> number\n> of loops over the sessionvars are implemented, are they always going to\n> be\n> small enough to not make any troubles?\n>\n\nThe longest cycle is a cycle that rechecks actively used variables against\nsystem catalog. 
No cycle depends on the content of variables.\n\n\n>\n> * sync_sessionvars_all explains why is it necessary to copy\n> xact_recheck_varids:\n>\n> When we check the variables, the system cache can be\n> invalidated,\n> and xact_recheck_varids can be enhanced.\n>\n> I'm not quite following what the \"enhancement\" part is about? Is\n> xact_recheck_varids could be somehow updated concurrently with the loop?\n>\n\nYes. pg_variable_cache_callback can be called when\nis_session_variable_valid is executed.\n\nMaybe \"extended\" can be a better word instead of \"enhanced\"? I reformulated\nthis comment\n\n\n\n>\n> * A small typo\n>\n> diff --git a/src/backend/commands/session_variable.c\n> b/src/backend/commands/session_variable.c\n> --- a/src/backend/commands/session_variable.c\n> +++ b/src/backend/commands/session_variable.c\n> @@ -485,7 +485,7 @@ sync_sessionvars_all(bool filter_lxid)\n>\n> /*\n> * When we check the variables, the system cache\n> can be invalidated,\n> - * and xac_recheck_varids can be enhanced. We want to\n> iterate\n> + * and xact_recheck_varids can be enhanced. We want to\n> iterate\n>\n>\nfixed\n\n\n> NOTE: The commentaries above were made based on the previous patch\n> version, but\n> it looks like those aspects were not changed.\n>\n\nThank you for comments, updated rebased patch attached\n\nRegards\n\nPavel", "msg_date": "Sun, 22 Jan 2023 19:47:07 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15 (typo)" }, { "msg_contents": "> On Sun, Jan 22, 2023 at 07:47:07PM +0100, Pavel Stehule wrote:\n> pá 20. 1. 2023 v 21:35 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\n> napsal:\n>\n> > > * I think it was already mentioned in the thread, there seems to be\n> not a\n> > > single usage of CHECK_FOR_INTERRUPTS in session_variable.c . 
But some\n> > number\n> > of loops over the sessionvars are implemented, are they always going to\n> > be\n> > small enough to not make any troubles?\n> >\n>\n> The longest cycle is a cycle that rechecks actively used variables against\n> system catalog. No cycle depends on the content of variables.\n\nRight, but what about the cases with huge number of variables? Not\nsaying it could be in any sense common, but possible to do.\n\n> > * sync_sessionvars_all explains why is it necessary to copy\n> > xact_recheck_varids:\n> >\n> > When we check the variables, the system cache can be\n> > invalidated,\n> > and xact_recheck_varids can be enhanced.\n> >\n> > I'm not quite following what the \"enhancement\" part is about? Is\n> > xact_recheck_varids could be somehow updated concurrently with the loop?\n> >\n>\n> Yes. pg_variable_cache_callback can be called when\n> is_session_variable_valid is executed.\n>\n> Maybe \"extended\" can be a better word instead of \"enhanced\"? I reformulated\n> this comment\n\nYeah, \"extended\" sounds better. But I was mostly confused about the\nmechanism, if the cache callback can interrupt the execution at any\nmoment when called, that would explain it.\n\n\n", "msg_date": "Mon, 23 Jan 2023 15:25:54 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15 (typo)" }, { "msg_contents": "Hi\n\nsmall change in regress test - try to stabilized tests with enabled\nWRITE_READ_PARSE_PLAN_TREES due an issue with catalogfield of RangeVar\nstructure\n\nRegards\n\nPavel", "msg_date": "Mon, 23 Jan 2023 18:42:55 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15 (typo)" }, { "msg_contents": "po 23. 1. 
2023 v 15:25 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\nnapsal:\n\n> > On Sun, Jan 22, 2023 at 07:47:07PM +0100, Pavel Stehule wrote:\n> > pá 20. 1. 2023 v 21:35 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\n> > napsal:\n> >\n> > > * I think it was already mentioned in the thread, there seems to be\n> not a\n> > > single usage of CHECK_FOR_INTERRUPTS in session_variable.c . But some\n> > > number\n> > > of loops over the sessionvars are implemented, are they always going\n> to\n> > > be\n> > > small enough to not make any troubles?\n> > >\n> >\n> > The longest cycle is a cycle that rechecks actively used variables\n> against\n> > system catalog. No cycle depends on the content of variables.\n>\n> Right, but what about the cases with huge number of variables? Not\n> saying it could be in any sense common, but possible to do.\n>\n\nNow I tested 10K variables (with enabled assertions, without it is should\nbe faster)\n\ncreating 763ms\n\ndo $$ begin for i in 1..10000 loop execute format('create variable %I as\nint', 'xx' || i); end loop; end $$;\n\nassigning 491ms\n\ndo $$ begin for i in 1..10000 loop execute format('let %I = 10', 'xx' ||\ni); end loop; end $$;\n\ndropping without necessity of memory cleaning 1155ms\n\ndo $$ begin for i in 1..10000 loop execute format('drop variable %I', 'xx'\n|| i); end loop; end $$;\n\ndropping with memory cleaning 2708\n\njust memory cleaning 72ms (time of commit - at commit cleaning)\n\ndo $$ begin for i in 1..10000 loop execute format('let %I = 10', 'xx' ||\ni); end loop; end $$;\nbegin;\ndo $$ begin for i in 1..10000 loop execute format('drop variable %I', 'xx'\n|| i); end loop; end $$;\ncommit;\n\nSo just syncing (cleaning 10K) variables needs less than 72 ms\n\nI can be wrong, but from these numbers I don't think so these sync cycles\nshould to contain CHECK_FOR_INTERRUPTS\n\nWhat do you think?\n\n\n\n> > > * sync_sessionvars_all explains why is it necessary to copy\n> > > xact_recheck_varids:\n> > >\n> > > When we 
check the variables, the system cache can be\n> > > invalidated,\n> > > and xact_recheck_varids can be enhanced.\n> > >\n> > > I'm not quite following what the \"enhancement\" part is about? Is\n> > > xact_recheck_varids could be somehow updated concurrently with the\n> loop?\n> > >\n> >\n> > Yes. pg_variable_cache_callback can be called when\n> > is_session_variable_valid is executed.\n> >\n> > Maybe \"extended\" can be a better word instead of \"enhanced\"? I\n> reformulated\n> > this comment\n>\n> Yeah, \"extended\" sounds better. But I was mostly confused about the\n> mechanism, if the cache callback can interrupt the execution at any\n> moment when called, that would explain it.\n>\n\npatch from yesterday has extended comment in this area :-)\n\nRegards\n\nPavel", "msg_date": "Mon, 23 Jan 2023 19:09:27 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15 (typo)" }, { "msg_contents": "Hi\n\nAfter\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=3cece34be842178a3c5697a58e03fb4a5d576378\nis not necessary workaround for WRITE_READ_PARSE_PLAN_TREES\n\nRegards\n\nPavel", "msg_date": "Mon, 23 Jan 2023 21:09:07 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15 (typo)" }, { "msg_contents": "> On Mon, Jan 23, 2023 at 07:09:27PM +0100, Pavel Stehule wrote:\n> po 23. 1. 2023 v 15:25 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\n> napsal:\n>\n> > > On Sun, Jan 22, 2023 at 07:47:07PM +0100, Pavel Stehule wrote:\n> > > pá 20. 1. 2023 v 21:35 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\n> > > napsal:\n> > >\n> > > > * I think it was already mentioned in the thread, there seems to be\n> > not a\n> > > > single usage of CHECK_FOR_INTERRUPTS in session_variable.c . But some\n> > > > number\n> > > > of loops over the sessionvars are implemented, are they always going\n> > to\n> > > > be\n> > > > small enough to not make any troubles?\n> > > >\n> > >\n> > > The longest cycle is a cycle that rechecks actively used variables\n> > against\n> > > system catalog. No cycle depends on the content of variables.\n> >\n> > Right, but what about the cases with huge number of variables? 
Not\n> > saying it could be in any sense common, but possible to do.\n> >\n>\n> Now I tested 10K variables (with enabled assertions, without it is should\n> be faster)\n>\n> [...]\n>\n> I can be wrong, but from these numbers I don't think so these sync cycles\n> should to contain CHECK_FOR_INTERRUPTS\n>\n> What do you think?\n\nWell, there is always possibility someone will create more variables\nthan any arbitrary limit we have tested for. But I see your point and\ndon't have a strong opinion about this, so let's keep it as it is :)\n\n> > > > * sync_sessionvars_all explains why is it necessary to copy\n> > > > xact_recheck_varids:\n> > > >\n> > > > When we check the variables, the system cache can be\n> > > > invalidated,\n> > > > and xact_recheck_varids can be enhanced.\n> > > >\n> > > > I'm not quite following what the \"enhancement\" part is about? Is\n> > > > xact_recheck_varids could be somehow updated concurrently with the\n> > loop?\n> > > >\n> > >\n> > > Yes. pg_variable_cache_callback can be called when\n> > > is_session_variable_valid is executed.\n> > >\n> > > Maybe \"extended\" can be a better word instead of \"enhanced\"? I\n> > reformulated\n> > > this comment\n> >\n> > Yeah, \"extended\" sounds better. But I was mostly confused about the\n> > mechanism, if the cache callback can interrupt the execution at any\n> > moment when called, that would explain it.\n> >\n>\n> patch from yesterday has extended comment in this area :-)\n\nThanks!\n\n\n", "msg_date": "Tue, 24 Jan 2023 10:56:44 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15 (typo)" }, { "msg_contents": ">\n> > I can be wrong, but from these numbers I don't think so these sync cycles\n> > should to contain CHECK_FOR_INTERRUPTS\n> >\n> > What do you think?\n>\n> Well, there is always possibility someone will create more variables\n> than any arbitrary limit we have tested for. 
But I see your point and\n> don't have a strong opinion about this, so let's keep it as it is :)\n>\n>\nIn this case, I am more afraid of the possible impacts of canceling than of a long\noperation.\n\nIt should be possible to cancel a query - but you cannot cancel a followup\noperation like memory cleaning or other resource releasing.\n\nThe possibility of being cancelled in this cycle means rewriting the processing\nto be much more defensive (and slower). And although you can hypothetically\ncancel sync cycles, you still have to finish these cycles at some point,\nbecause you need to clean the memory from garbage.\n\nRegards\n\nPavel\n\nok :)\n\nIf it is an issue, then it can be easily fixed in the future, but I don't think\n\nI\n\n\n>\n> I can be wrong, but from these numbers I don't think so these sync cycles\n> should to contain CHECK_FOR_INTERRUPTS\n>\n> What do you think?\n", "msg_date": "Tue, 24 Jan 2023 12:20:51 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15 (typo)" }, { "msg_contents": "Hi\n\nfresh rebase\n\nRegards\n\nPavel", "msg_date": "Thu, 2 Feb 2023 19:35:28 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15 (typo)" }, { "msg_contents": "Hi\r\n\r\nI read notes from the FOSDEM developer meeting, and I would like to repeat\r\nthe note about the motivation for introducing session variables: one\r\nreason why session variables are not transactional, and why they should not\r\nbe replaced by temp tables, is performance.\r\n\r\nThere are more use cases where session variables can be used. One scenario\r\nfor session variables is to use them like static variables. They can be\r\nused from some row triggers, .. 
where local variable is not enough\r\n(like\r\nhttps://www.cybertec-postgresql.com/en/why-are-my-postgresql-updates-getting-slower/\r\n)\r\n\r\ncreate variable xx as int;\r\n\r\ndo $$\r\nbegin\r\n let xx = 1;\r\n for i in 1..10000 loop\r\n let xx = xx + 1;\r\n end loop;\r\n raise notice '%', xx;\r\nend;\r\n$$;\r\nNOTICE: 10001\r\nDO\r\nTime: 4,079 ms\r\n\r\ncreate temp table xx01(a int);\r\ndelete from xx01; vacuum full xx01; vacuum;\r\n\r\ndo $$\r\nbegin\r\n insert into xx01 values(1);\r\n for i in 1..10000 loop\r\n update xx01 set a = a + 1;\r\n end loop;\r\n raise notice '%', (select a from xx01);\r\nend;\r\n$$;\r\nNOTICE: 10001\r\nDO\r\nTime: 1678,949 ms (00:01,679)\r\n\r\npostgres=# \\dt+ xx01\r\n List of relations\r\n┌───────────┬──────┬───────┬───────┬─────────────┬───────────────┬────────┬─────────────┐\r\n│ Schema │ Name │ Type │ Owner │ Persistence │ Access method │ Size │\r\nDescription │\r\n╞═══════════╪══════╪═══════╪═══════╪═════════════╪═══════════════╪════════╪═════════════╡\r\n│ pg_temp_3 │ xx01 │ table │ pavel │ temporary │ heap │ 384 kB │\r\n │\r\n└───────────┴──────┴───────┴───────┴─────────────┴───────────────┴────────┴─────────────┘\r\n(1 row)\r\n\r\nOriginally, I tested 100K iterations, but it was too slow, and I cancelled\r\nit after 5 minutes. 
Vacuum can be done after the end of transaction.\r\n\r\nAnd there can be another negative impact related to bloating of\r\npg_attribute, pg_class, pg_depend tables.\r\n\r\nWorkaround based on custom GUC is not too bad, but there is not any\r\npossibility of security protection (and there is not any possibility of\r\nstatic check in plpgsql_check) - and still it is 20x slower than session\r\nvariables\r\n\r\ndo $$\r\nbegin\r\n  perform set_config('cust.xx', '1', false);\r\n  for i in 1..10000 loop\r\n    perform set_config('cust.xx', (current_setting('cust.xx')::int +\r\n1)::text, true);\r\n  end loop;\r\n  raise notice '%', current_setting('cust.xx');\r\nend;\r\n$$;\r\nNOTICE:  10001\r\nDO\r\nTime: 80,201 ms\r\n\r\nSession variables don't try to replace temp tables, and temp tables can be\r\na very bad replacement of session's variables.\r\n\r\nRegards\r\n\r\nPavel\r\n", "msg_date": "Fri, 3 Feb 2023 21:33:52 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15 (typo)" }, { "msg_contents": "Hi\n\nfix tests after\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=faff8f8e47f18c7d589453e2e0d841d2bd96c1ac\n\nRegards\n\nPavel", "msg_date": "Mon, 6 Feb 2023 11:47:13 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15 (typo)" }, { "msg_contents": "Hi\n\nfresh rebase\n\nRegards\n\nPavel", "msg_date": "Tue, 28 Feb 2023 06:12:50 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "> On Tue, Feb 28, 2023 at 06:12:50AM +0100, Pavel Stehule wrote:\n>\n> fresh rebase\n\nI'm continuing to review, this time going through shadowing stuff in\ntransformColumnRef, IdentifyVariable etc. Well, that's a lot of leg work\nfor rather little outcome :) I guess all attempts to simplify this part\nweren't successful?\n\nCouple of questions to it. In IdentifyVariable in the branch handling\ntwo values the commentary says:\n\n    /*\n     * a.b can mean \"schema\".\"variable\" or \"variable\".\"field\",\n     * Check both variants, and returns InvalidOid with\n     * not_unique flag, when both interpretations are\n     * possible. Second node can be star. In this case, the\n     * only allowed possibility is \"variable\".\"*\".\n     */\n\nI read this as \"variable\".\"*\" is a valid combination, but the very next\npart of this condition says differently:\n\n    /*\n     * Session variables doesn't support unboxing by star\n     * syntax. 
But this syntax have to be calculated here,\n * because can come from non session variables related\n * expressions.\n */\n Assert(IsA(field2, A_Star));\n\nIs the first commentary not quite correct?\n\nAnother question about how shadowing warning should work between namespaces.\nLet's say I've got two namespaces, public and test, both have a session\nvariable with the same name, but only one has a table with the same name:\n\n -- in public\n create table test_agg(a int);\n create type for_test_agg as (a int);\n create variable test_agg for_test_agg;\n\n -- in test\n create type for_test_agg as (a int);\n create variable test_agg for_test_agg;\n\nNow if we will try to trigger the shadowing warning from public\nnamespace, it would work differently:\n\n -- in public\n =# let test.test_agg.a = 10;\n =# let test_agg.a = 20;\n =# set session_variables_ambiguity_warning to on;\n\n\t-- note the value returned from the table\n\t=# select jsonb_agg(test_agg.a) from test_agg;\n\tWARNING: 42702: session variable \"test_agg.a\" is shadowed\n\tLINE 1: select jsonb_agg(test_agg.a) from test_agg;\n\t\t\t\t\t\t\t ^\n\tDETAIL: Session variables can be shadowed by columns, routine's variables and routine's arguments with the same name.\n\tLOCATION: transformColumnRef, parse_expr.c:940\n\t jsonb_agg\n\t-----------\n\t [1]\n\n\t-- no warning, note the session variable value\n\t=# select jsonb_agg(test.test_agg.a) from test_agg;\n\t jsonb_agg\n\t-----------\n\t [10]\n\nIt happens because in the second scenario the logic inside transformColumnRef\nwill not set up the node variable (there is no corresponding table in the\n\"test\" schema), and the following conditions covering session variables\nshadowing are depending on it. Is it supposed to be like this?\n\n\n", "msg_date": "Fri, 3 Mar 2023 21:17:57 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "pá 3. 3. 
2023 v 21:19 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\nnapsal:\n\n> > On Tue, Feb 28, 2023 at 06:12:50AM +0100, Pavel Stehule wrote:\n> >\n> > fresh rebase\n>\n> I'm continuing to review, this time going through shadowing stuff in\n> transformColumnRef, IdentifyVariable etc. Well, that's a lot of leg work\n> for rather little outcome :) I guess all attempts to simplify this part\n> weren't successful?\n>\n\nOriginally I wrote it in the strategy \"reduce false alarms\". But when I\nthink about it, it may not be good in this case. Usually the changes are\ndone first on some developer environment, and good practice is to disallow\nsame (possibly confusing) identifiers. So I am not against making this\nwarning more aggressive with some possibility of false alarms. With\nblocking reduction of alarms the differences in regress was zero. So I\nreduced this part.\n\n\n\n>\n> Couple of questions to it. In IdentifyVariable in the branch handling\n> two values the commentary says:\n>\n> /*\n> * a.b can mean \"schema\".\"variable\" or \"variable\".\"field\",\n> * Check both variants, and returns InvalidOid with\n> * not_unique flag, when both interpretations are\n> * possible. Second node can be star. In this case, the\n> * only allowed possibility is \"variable\".\"*\".\n> */\n>\n> I read this as \"variable\".\"*\" is a valid combination, but the very next\n> part of this condition says differently:\n>\n\n\n\n>\n> /*\n> * Session variables doesn't support unboxing by star\n> * syntax. But this syntax have to be calculated here,\n> * because can come from non session variables related\n> * expressions.\n> */\n> Assert(IsA(field2, A_Star));\n>\n> Is the first commentary not quite correct?\n>\n\nI think it is correct, but maybe I was not good at describing this issue.\nThe sentence \"Second node can be a star. 
In this case, the\nonly allowed possibility is \"variable\".\"*\".\" should be in the next\ncomment.\n\nIn this part we process a list of identifiers, and we try to map these\nidentifiers to some semantics. The parser should ensure that\nall fields of lists are strings or the last field is a star. In this case\nthe semantic \"schema\".* is nonsense, and the only possible semantic\nis \"variable\".*. It is valid semantics, but unsupported now. Unboxing is\navailable by syntax (var).*\n\nI changed the comment\n\n\n\n>\n> Another question about how shadowing warning should work between\n> namespaces.\n> Let's say I've got two namespaces, public and test, both have a session\n> variable with the same name, but only one has a table with the same name:\n>\n>     -- in public\n>     create table test_agg(a int);\n>     create type for_test_agg as (a int);\n>     create variable test_agg for_test_agg;\n>\n>     -- in test\n>     create type for_test_agg as (a int);\n>     create variable test_agg for_test_agg;\n>\n> Now if we will try to trigger the shadowing warning from public\n> namespace, it would work differently:\n>\n>     -- in public\n>     =# let test.test_agg.a = 10;\n>     =# let test_agg.a = 20;\n>     =# set session_variables_ambiguity_warning to on;\n>\n>         -- note the value returned from the table\n>         =# select jsonb_agg(test_agg.a) from test_agg;\n>         WARNING:  42702: session variable \"test_agg.a\" is shadowed\n>         LINE 1: select jsonb_agg(test_agg.a) from test_agg;\n>                                                          ^\n>         DETAIL:  Session variables can be shadowed by columns, routine's\n> variables and routine's arguments with the same name.\n>         LOCATION:  transformColumnRef, parse_expr.c:940\n>          jsonb_agg\n>         -----------\n>          [1]\n>\n>         -- no warning, note the session variable value\n>         =# select jsonb_agg(test.test_agg.a) from test_agg;\n>          jsonb_agg\n>         -----------\n>          [10]\n>\n> It happens because in the second scenario the logic inside\n> transformColumnRef\n> will not set up the node variable (there is no corresponding table in the\n> \"test\" schema), and 
the following conditions covering session variables\n> shadowing are depending on it. Is it supposed to be like this?\n>\n\nI am sorry, I don't understand what you want to describe. Session variables\nare shadowed by relations, ever. It is design. In the first case, the\nvariable is shadowed and a warning is raised. In the second case,\n\"test\".\"test_agg\".\"a\" is a fully unique qualified identifier, and then the\nvariable is used, and then it is not shadowed.\n\nupdated patches attached\n\nRegards\n\nPavel", "msg_date": "Wed, 8 Mar 2023 08:31:07 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "> On Wed, Mar 08, 2023 at 08:31:07AM +0100, Pavel Stehule wrote:\n> pá 3. 3. 2023 v 21:19 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\n> napsal:\n>\n> > > On Tue, Feb 28, 2023 at 06:12:50AM +0100, Pavel Stehule wrote:\n> > >\n> > > fresh rebase\n> >\n> > I'm continuing to review, this time going through shadowing stuff in\n> > transformColumnRef, IdentifyVariable etc. Well, that's a lot of leg work\n> > for rather little outcome :) I guess all attempts to simplify this part\n> > weren't successful?\n> >\n>\n> Originally I wrote it in the strategy \"reduce false alarms\". But when I\n> think about it, it may not be good in this case. Usually the changes are\n> done first on some developer environment, and good practice is to disallow\n> same (possibly confusing) identifiers. So I am not against making this\n> warning more aggressive with some possibility of false alarms. With\n> blocking reduction of alarms the differences in regress was zero. So I\n> reduced this part.\n\nGreat, thank you.\n\n> > Couple of questions to it. 
In IdentifyVariable in the branch handling\n> > two values the commentary says:\n> >\n> > /*\n> > * a.b can mean \"schema\".\"variable\" or \"variable\".\"field\",\n> > * Check both variants, and returns InvalidOid with\n> > * not_unique flag, when both interpretations are\n> > * possible. Second node can be star. In this case, the\n> > * only allowed possibility is \"variable\".\"*\".\n> > */\n> >\n> > I read this as \"variable\".\"*\" is a valid combination, but the very next\n> > part of this condition says differently:\n> >\n>\n>\n>\n> >\n> > /*\n> > * Session variables doesn't support unboxing by star\n> > * syntax. But this syntax have to be calculated here,\n> > * because can come from non session variables related\n> > * expressions.\n> > */\n> > Assert(IsA(field2, A_Star));\n> >\n> > Is the first commentary not quite correct?\n> >\n>\n> I think it is correct, but maybe I was not good at describing this issue.\n> The sentence \"Second node can be a star. In this case, the\n> the only allowed possibility is \"variable\".\"*\".\" should be in the next\n> comment.\n>\n> In this part we process a list of identifiers, and we try to map these\n> identifiers to some semantics. The parser should ensure that\n> all fields of lists are strings or the last field is a star. In this case\n> the semantic \"schema\".* is nonsense, and the only possible semantic\n> is \"variable\".*. It is valid semantics, but unsupported now. Unboxing is\n> available by syntax (var).*\n>\n> I changed the comment\n\nThanks. Just to clarify, by \"unsupported\" you mean unsupported in the\ncurrent patch implementation right? 
From what I understand value\nunboxing could be done without parentheses in a non-top level select\nquery.\n\nAs a side note, I'm not sure if this branch is exercised in any tests.\nI've replaced returning InvalidOid with actually doing LookupVariable(NULL, a, true)\nin this case, and all the tests are still passing.\n\n> > Another question about how shadowing warning should work between\n> > namespaces.\n> > Let's say I've got two namespaces, public and test, both have a session\n> > variable with the same name, but only one has a table with the same name:\n> >\n> > -- in public\n> > create table test_agg(a int);\n> > create type for_test_agg as (a int);\n> > create variable test_agg for_test_agg;\n> >\n> > -- in test\n> > create type for_test_agg as (a int);\n> > create variable test_agg for_test_agg;\n> >\n> > Now if we will try to trigger the shadowing warning from public\n> > namespace, it would work differently:\n> >\n> > -- in public\n> > =# let test.test_agg.a = 10;\n> > =# let test_agg.a = 20;\n> > =# set session_variables_ambiguity_warning to on;\n> >\n> > -- note the value returned from the table\n> > =# select jsonb_agg(test_agg.a) from test_agg;\n> > WARNING: 42702: session variable \"test_agg.a\" is shadowed\n> > LINE 1: select jsonb_agg(test_agg.a) from test_agg;\n> > ^\n> > DETAIL: Session variables can be shadowed by columns, routine's\n> > variables and routine's arguments with the same name.\n> > LOCATION: transformColumnRef, parse_expr.c:940\n> > jsonb_agg\n> > -----------\n> > [1]\n> >\n> > -- no warning, note the session variable value\n> > =# select jsonb_agg(test.test_agg.a) from test_agg;\n> > jsonb_agg\n> > -----------\n> > [10]\n> >\n> > It happens because in the second scenario the logic inside\n> > transformColumnRef\n> > will not set up the node variable (there is no corresponding table in the\n> > \"test\" schema), and the following conditions covering session variables\n> > shadowing are depending on it. 
Is it supposed to be like this?\n> >\n>\n> I am sorry, I don't understand what you want to describe. Session variables\n> are shadowed by relations, ever. It is design. In the first case, the\n> variable is shadowed and a warning is raised. In the second case,\n> \"test\".\"test_agg\".\"a\" is a fully unique qualified identifier, and then the\n> variable is used, and then it is not shadowed.\n\nYeah, there was a misunderstanding on my side, sorry. For whatever\nreason I thought shadowing between schemas is a reasonable thing, but as\nyou pointed out it doesn't really make sense.\n\n\n", "msg_date": "Wed, 8 Mar 2023 16:33:49 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "st 8. 3. 2023 v 16:35 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\nnapsal:\n\n> > On Wed, Mar 08, 2023 at 08:31:07AM +0100, Pavel Stehule wrote:\n> > pá 3. 3. 2023 v 21:19 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\n> > napsal:\n> >\n> > > > On Tue, Feb 28, 2023 at 06:12:50AM +0100, Pavel Stehule wrote:\n> > > >\n> > > > fresh rebase\n> > >\n> > > I'm continuing to review, this time going through shadowing stuff in\n> > > transformColumnRef, IdentifyVariable etc. Well, that's a lot of leg\n> work\n> > > for rather little outcome :) I guess all attempts to simplify this part\n> > > weren't successful?\n> > >\n> >\n> > Originally I wrote it in the strategy \"reduce false alarms\". But when I\n> > think about it, it may not be good in this case. Usually the changes are\n> > done first on some developer environment, and good practice is to\n> disallow\n> > same (possibly confusing) identifiers. So I am not against making this\n> > warning more aggressive with some possibility of false alarms. With\n> > blocking reduction of alarms the differences in regress was zero. So I\n> > reduced this part.\n>\n> Great, thank you.\n>\n> > > Couple of questions to it. 
In IdentifyVariable in the branch handling\n> > > two values the commentary says:\n> > >\n> > > /*\n> > > * a.b can mean \"schema\".\"variable\" or \"variable\".\"field\",\n> > > * Check both variants, and returns InvalidOid with\n> > > * not_unique flag, when both interpretations are\n> > > * possible. Second node can be star. In this case, the\n> > > * only allowed possibility is \"variable\".\"*\".\n> > > */\n> > >\n> > > I read this as \"variable\".\"*\" is a valid combination, but the very next\n> > > part of this condition says differently:\n> > >\n> >\n> >\n> >\n> > >\n> > > /*\n> > > * Session variables doesn't support unboxing by star\n> > > * syntax. But this syntax have to be calculated here,\n> > > * because can come from non session variables related\n> > > * expressions.\n> > > */\n> > > Assert(IsA(field2, A_Star));\n> > >\n> > > Is the first commentary not quite correct?\n> > >\n> >\n> > I think it is correct, but maybe I was not good at describing this issue.\n> > The sentence \"Second node can be a star. In this case, the\n> > the only allowed possibility is \"variable\".\"*\".\" should be in the next\n> > comment.\n> >\n> > In this part we process a list of identifiers, and we try to map these\n> > identifiers to some semantics. The parser should ensure that\n> > all fields of lists are strings or the last field is a star. In this case\n> > the semantic \"schema\".* is nonsense, and the only possible semantic\n> > is \"variable\".*. It is valid semantics, but unsupported now. Unboxing is\n> > available by syntax (var).*\n> >\n> > I changed the comment\n>\n> Thanks. Just to clarify, by \"unsupported\" you mean unsupported in the\n> current patch implementation right? From what I understand value\n> unboxing could be done without parentheses in a non-top level select\n> query.\n>\n\nYes, it can be implemented in the next steps. 
I don't think there can be\nsome issues, but it means more lines and a little bit more complex\ninterface.\nIn this step, I try to implement minimalistic required functionality that\ncan be enhanced in next steps. For this area is an important fact, so\nsession variables\nwill be shadowed always by relations. It means new functionality in session\nvariables cannot break existing applications ever, and then there is space\nfor future enhancement.\n\n\n>\n> As a side note, I'm not sure if this branch is exercised in any tests.\n> I've replaced returning InvalidOid with actually doing\n> LookupVariable(NULL, a, true)\n> in this case, and all the tests are still passing.\n>\n\nUsually we don't test not yet implemented functionality.\n\n\n>\n> > > Another question about how shadowing warning should work between\n> > > namespaces.\n> > > Let's say I've got two namespaces, public and test, both have a session\n> > > variable with the same name, but only one has a table with the same\n> name:\n> > >\n> > > -- in public\n> > > create table test_agg(a int);\n> > > create type for_test_agg as (a int);\n> > > create variable test_agg for_test_agg;\n> > >\n> > > -- in test\n> > > create type for_test_agg as (a int);\n> > > create variable test_agg for_test_agg;\n> > >\n> > > Now if we will try to trigger the shadowing warning from public\n> > > namespace, it would work differently:\n> > >\n> > > -- in public\n> > > =# let test.test_agg.a = 10;\n> > > =# let test_agg.a = 20;\n> > > =# set session_variables_ambiguity_warning to on;\n> > >\n> > > -- note the value returned from the table\n> > > =# select jsonb_agg(test_agg.a) from test_agg;\n> > > WARNING: 42702: session variable \"test_agg.a\" is shadowed\n> > > LINE 1: select jsonb_agg(test_agg.a) from test_agg;\n> > > ^\n> > > DETAIL: Session variables can be shadowed by columns,\n> routine's\n> > > variables and routine's arguments with the same name.\n> > > LOCATION: transformColumnRef, parse_expr.c:940\n> > > jsonb_agg\n> > 
> -----------\n> > > [1]\n> > >\n> > > -- no warning, note the session variable value\n> > > =# select jsonb_agg(test.test_agg.a) from test_agg;\n> > > jsonb_agg\n> > > -----------\n> > > [10]\n> > >\n> > > It happens because in the second scenario the logic inside\n> > > transformColumnRef\n> > > will not set up the node variable (there is no corresponding table in\n> the\n> > > \"test\" schema), and the following conditions covering session variables\n> > > shadowing are depending on it. Is it supposed to be like this?\n> > >\n> >\n> > I am sorry, I don't understand what you want to describe. Session\n> variables\n> > are shadowed by relations, ever. It is design. In the first case, the\n> > variable is shadowed and a warning is raised. In the second case,\n> > \"test\".\"test_agg\".\"a\" is a fully unique qualified identifier, and then\n> the\n> > variable is used, and then it is not shadowed.\n>\n> Yeah, there was a misunderstanding on my side, sorry. For whatever\n> reason I thought shadowing between schemas is a reasonable thing, but as\n> you pointed out it doesn't really make sense.\n>\n\nyes. Thinking about this question is not trivial. There are more dimensions\n- like search path setting, catalog name, possible three fields identifier,\npossible collisions between variable and variable, and between variable and\nrelation. But current design can work I think. Still it is strong enough,\nand it is simplified against start design.\n\n\n\nRegards\n\nPavel\n\nst 8. 3. 2023 v 16:35 odesílatel Dmitry Dolgov <9erthalion6@gmail.com> napsal:> On Wed, Mar 08, 2023 at 08:31:07AM +0100, Pavel Stehule wrote:\n> pá 3. 3. 2023 v 21:19 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\n> napsal:\n>\n> > > On Tue, Feb 28, 2023 at 06:12:50AM +0100, Pavel Stehule wrote:\n> > >\n> > > fresh rebase\n> >\n> > I'm continuing to review, this time going through shadowing stuff in\n> > transformColumnRef, IdentifyVariable etc. 
Well, that's a lot of leg work\n> > for rather little outcome :) I guess all attempts to simplify this part\n> > weren't successful?\n> >\n>\n> Originally I wrote it in the strategy \"reduce false alarms\". But when I\n> think about it, it may not be good in this case. Usually the changes are\n> done first on some developer environment, and good practice is to disallow\n> same (possibly confusing) identifiers. So I am not against making this\n> warning more aggressive with some possibility of false alarms.  With\n> blocking reduction of alarms the differences in regress was zero. So I\n> reduced this part.\n\nGreat, thank you.\n\n> > Couple of questions to it. In IdentifyVariable in the branch handling\n> > two values the commentary says:\n> >\n> >     /*\n> >      * a.b can mean \"schema\".\"variable\" or \"variable\".\"field\",\n> >      * Check both variants, and returns InvalidOid with\n> >      * not_unique flag, when both interpretations are\n> >      * possible. Second node can be star. In this case, the\n> >      * only allowed possibility is \"variable\".\"*\".\n> >      */\n> >\n> > I read this as \"variable\".\"*\" is a valid combination, but the very next\n> > part of this condition says differently:\n> >\n>\n>\n>\n> >\n> >     /*\n> >      * Session variables doesn't support unboxing by star\n> >      * syntax. But this syntax have to be calculated here,\n> >      * because can come from non session variables related\n> >      * expressions.\n> >      */\n> >     Assert(IsA(field2, A_Star));\n> >\n> > Is the first commentary not quite correct?\n> >\n>\n> I think it is correct, but maybe I was not good at describing this issue.\n> The sentence \"Second node can be a star. In this case, the\n> the only allowed possibility is \"variable\".\"*\".\" should be in the next\n> comment.\n>\n> In this part we process a list of identifiers, and we try to map these\n> identifiers to some semantics. 
The parser should ensure that
> all fields of lists are strings or the last field is a star. In this case
> the semantic "schema".* is nonsense, and the only possible semantic
> is "variable".*. It is valid semantics, but unsupported now. Unboxing is
> available by syntax (var).*
>
> I changed the comment

Thanks. Just to clarify, by "unsupported" you mean unsupported in the
current patch implementation right? From what I understand value
unboxing could be done without parentheses in a non-top level select
query.

Yes, it can be implemented in the next steps. I don't think there can be some issues, but it means more lines and a little bit more complex interface.

In this step, I try to implement minimalistic required functionality that can be enhanced in next steps. For this area is an important fact, so session variables will be shadowed always by relations. It means new functionality in session variables cannot break existing applications ever, and then there is space for future enhancement. 

As a side note, I'm not sure if this branch is exercised in any tests.
I've replaced returning InvalidOid with actually doing LookupVariable(NULL, a, true)
in this case, and all the tests are still passing.

Usually we don't test not yet implemented functionality.  
\n\n> > Another question about how shadowing warning should work between\n> > namespaces.\n> > Let's say I've got two namespaces, public and test, both have a session\n> > variable with the same name, but only one has a table with the same name:\n> >\n> >     -- in public\n> >     create table test_agg(a int);\n> >     create type for_test_agg as (a int);\n> >     create variable test_agg for_test_agg;\n> >\n> >     -- in test\n> >     create type for_test_agg as (a int);\n> >     create variable test_agg for_test_agg;\n> >\n> > Now if we will try to trigger the shadowing warning from public\n> > namespace, it would work differently:\n> >\n> >     -- in public\n> >     =# let test.test_agg.a = 10;\n> >     =# let test_agg.a = 20;\n> >     =# set session_variables_ambiguity_warning to on;\n> >\n> >         -- note the value returned from the table\n> >         =# select jsonb_agg(test_agg.a) from test_agg;\n> >         WARNING:  42702: session variable \"test_agg.a\" is shadowed\n> >         LINE 1: select jsonb_agg(test_agg.a) from test_agg;\n> >                                                          ^\n> >         DETAIL:  Session variables can be shadowed by columns, routine's\n> > variables and routine's arguments with the same name.\n> >         LOCATION:  transformColumnRef, parse_expr.c:940\n> >          jsonb_agg\n> >         -----------\n> >          [1]\n> >\n> >         -- no warning, note the session variable value\n> >         =# select jsonb_agg(test.test_agg.a) from test_agg;\n> >          jsonb_agg\n> >         -----------\n> >          [10]\n> >\n> > It happens because in the second scenario the logic inside\n> > transformColumnRef\n> > will not set up the node variable (there is no corresponding table in the\n> > \"test\" schema), and the following conditions covering session variables\n> > shadowing are depending on it. Is it supposed to be like this?\n> >\n>\n> I am sorry, I don't understand what you want to describe. 
Session variables
> are shadowed by relations, ever. It is design. In the first case, the
> variable is shadowed and a warning is raised. In the second case,
> "test"."test_agg"."a" is a fully unique qualified identifier, and then the
> variable is used, and then it is not shadowed.

Yeah, there was a misunderstanding on my side, sorry. For whatever
reason I thought shadowing between schemas is a reasonable thing, but as
you pointed out it doesn't really make sense.", "msg_date": "Wed, 8 Mar 2023 17:07:37 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi

rebase + fix-update pg_dump tests

Regards

Pavel", "msg_date": "Fri, 17 Mar 2023 21:50:09 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On 17.03.23 21:50, Pavel Stehule wrote:
> rebase + fix-update pg_dump tests

I have a few comments on the code:

0001

ExecGrant_Variable() could probably use ExecGrant_common().

The additions to syscache.c should be formatted to the new style.

in pg_variable.h:

- create_lsn ought to have a "var" prefix.

- typo: "typmode for variable's type"

- What is the purpose of struct Variable? It seems very similar to
 FormData_pg_variable. 
At least a comment would be useful.\n\nPreserve the trailing comma in ParseExprKind.\n\n\n0002\n\nexpr_kind_allows_session_variables() should have some explanation\nabout criteria for determining which expression kinds should allow\nvariables.\n\nUsually, we handle EXPR_KIND_* switches without default case, so we\nget notified what needs to be changed if a new enum symbol is added.\n\n\n0010\n\nThe material from the tutorial (advanced.sgml) might be better in\nddl.sgml.\n\nIn catalogs.sgml, the columns don't match the ones actually defined in\npg_variable.h in patch 0001 (e.g., create_lsn is missing and the order\ndoesn't match).\n\n(The order of columns in pg_variable.h didn't immediately make sense to \nme either, so maybe there is a middle ground to be found.)\n\nsession_variables_ambiguity_warning: There needs to be more\ninformation about this. The current explanation is basically just,\n\"warn if your query is confusing\". Why do I want that? Why would I\nnot want that? What is the alternative? What are some examples?\nShouldn't there be a standard behavior without a need to configure\nanything?\n\nIn allfiles.sgml, dropVariable should be before dropView.\n\n\n\n", "msg_date": "Tue, 21 Mar 2023 17:18:53 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On 17.03.23 21:50, Pavel Stehule wrote:\n> Hi\n> \n> rebase + fix-update pg_dump tests\n> \n> Regards\n> \n> Pavel\n> \n\nI have spent several hours studying the code and the past discussions.\n\nThe problem I see in general is that everyone who reviews and tests the \npatches finds more problems, behavioral, weird internal errors, crashes. \n These are then immediately fixed, and the cycle starts again. 
I don't \nhave the sense that this process has arrived at a steady state yet.\n\nThe other issue is that by its nature this patch adds a lot of code in a \nlot of places. Large patches are more likely to be successful if they \nadd a lot of code in one place or smaller amounts of code in a lot of \nplaces. But this patch does both and it's just overwhelming. There is \nso much new internal functionality and terminology. Variables can be \ncreated, registered, initialized, stored, copied, prepared, set, freed, \nremoved, released, synced, dropped, and more. I don't know if anyone \nhas actually reviewed all that in detail.\n\nHas any effort been made to make this simpler, smaller, reduce scope, \nrefactoring, find commonalities with other features, try to manage the \ncomplexity somehow?\n\nI'm not making a comment on the details of the functionality itself. I \njust think on the coding level it's not gotten to a satisfying situation \nyet.\n\n\n\n", "msg_date": "Thu, 23 Mar 2023 16:33:13 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\n\nčt 23. 3. 2023 v 16:33 odesílatel Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> napsal:\n\n> On 17.03.23 21:50, Pavel Stehule wrote:\n> > Hi\n> >\n> > rebase + fix-update pg_dump tests\n> >\n> > Regards\n> >\n> > Pavel\n> >\n>\n> I have spent several hours studying the code and the past discussions.\n>\n> The problem I see in general is that everyone who reviews and tests the\n> patches finds more problems, behavioral, weird internal errors, crashes.\n> These are then immediately fixed, and the cycle starts again. I don't\n> have the sense that this process has arrived at a steady state yet.\n>\n> The other issue is that by its nature this patch adds a lot of code in a\n> lot of places. 
Large patches are more likely to be successful if they\n> add a lot of code in one place or smaller amounts of code in a lot of\n> places. But this patch does both and it's just overwhelming. There is\n> so much new internal functionality and terminology. Variables can be\n> created, registered, initialized, stored, copied, prepared, set, freed,\n> removed, released, synced, dropped, and more. I don't know if anyone\n> has actually reviewed all that in detail.\n>\n> Has any effort been made to make this simpler, smaller, reduce scope,\n> refactoring, find commonalities with other features, try to manage the\n> complexity somehow?\n>\n> I'm not making a comment on the details of the functionality itself. I\n> just think on the coding level it's not gotten to a satisfying situation\n> yet.\n>\n>\nI agree that this patch is large, but almost all code is simple. Complex\ncode is \"only\" in 0002-session-variables.patch (113KB/438KB).\n\nNow, I have no idea how the functionality can be sensibly reduced or\ndivided (no without significant performance loss). I see two difficult\npoints in this code:\n\n1. when to clean memory. The code implements cleaning very accurately - and\nthis is unique in Postgres. Partially I implement some functionality of\nstorage manager. Probably no code from Postgres can be reused, because\nthere is not any support for global temporary objects. Cleaning based on\nsinval messages processing is difficult, but there is nothing else. The\ncode is a little bit more complex, because there are three types of session\nvariables: a) session variables, b) temp session variables, c) session\nvariables with transaction scope. Maybe @c can be removed, and maybe we\ndon't need to support not null default (this can simplify initialization).\nWhat do you think about it?\n\n2. how to pass a variable's value to the executor. 
The implementation is
based on extending the Param node, but it cannot reuse query params buffers
and implements own.
But it is hard to simplify code, because we want to support usage variables
in queries, and usage in PL/pgSQL expressions too. And both are processed
differently.

Regards

Pavel", "msg_date": "Thu, 23 Mar 2023 19:54:14 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "čt 23. 3. 2023 v 19:54 odesílatel Pavel Stehule <pavel.stehule@gmail.com>
napsal:

> Hi
>
>
> čt 23. 3. 
2023 v 16:33 odesílatel Peter Eisentraut <\n> peter.eisentraut@enterprisedb.com> napsal:\n>\n>> On 17.03.23 21:50, Pavel Stehule wrote:\n>> > Hi\n>> >\n>> > rebase + fix-update pg_dump tests\n>> >\n>> > Regards\n>> >\n>> > Pavel\n>> >\n>>\n>> I have spent several hours studying the code and the past discussions.\n>>\n>> The problem I see in general is that everyone who reviews and tests the\n>> patches finds more problems, behavioral, weird internal errors, crashes.\n>> These are then immediately fixed, and the cycle starts again. I don't\n>> have the sense that this process has arrived at a steady state yet.\n>>\n>> The other issue is that by its nature this patch adds a lot of code in a\n>> lot of places. Large patches are more likely to be successful if they\n>> add a lot of code in one place or smaller amounts of code in a lot of\n>> places. But this patch does both and it's just overwhelming. There is\n>> so much new internal functionality and terminology. Variables can be\n>> created, registered, initialized, stored, copied, prepared, set, freed,\n>> removed, released, synced, dropped, and more. I don't know if anyone\n>> has actually reviewed all that in detail.\n>>\n>> Has any effort been made to make this simpler, smaller, reduce scope,\n>> refactoring, find commonalities with other features, try to manage the\n>> complexity somehow?\n>>\n>> I'm not making a comment on the details of the functionality itself. I\n>> just think on the coding level it's not gotten to a satisfying situation\n>> yet.\n>>\n>>\n> I agree that this patch is large, but almost all code is simple. Complex\n> code is \"only\" in 0002-session-variables.patch (113KB/438KB).\n>\n> Now, I have no idea how the functionality can be sensibly reduced or\n> divided (no without significant performance loss). I see two difficult\n> points in this code:\n>\n> 1. when to clean memory. The code implements cleaning very accurately -\n> and this is unique in Postgres. 
Partially I implement some functionality of
> storage manager. Probably no code from Postgres can be reused, because
> there is not any support for global temporary objects. Cleaning based on
> sinval messages processing is difficult, but there is nothing else. The
> code is a little bit more complex, because there are three types of session
> variables: a) session variables, b) temp session variables, c) session
> variables with transaction scope. Maybe @c can be removed, and maybe we
> don't need to support not null default (this can simplify initialization).
> What do you think about it?
>
> 2. how to pass a variable's value to the executor. The implementation is
> based on extending the Param node, but it cannot reuse query params buffers
> and implements own.
> But it is hard to simplify code, because we want to support usage
> variables in queries, and usage in PL/pgSQL expressions too. And both are
> processed differently.
>

Maybe I can divide the patch 0002-session-variables to three sections -
related to memory management, planning and execution?

Regards

Pavel


> Regards
>
> Pavel
>
>
", "msg_date": "Fri, 24 Mar 2023 08:04:08 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "út 21. 3. 2023 v 17:18 odesílatel Peter Eisentraut <
peter.eisentraut@enterprisedb.com> napsal:

> On 17.03.23 21:50, Pavel Stehule wrote:
> > rebase + fix-update pg_dump tests
>
> I have a few comments on the code:
>
> 0001
>
> ExecGrant_Variable() could probably use ExecGrant_common().
>

done


> The additions to syscache.c should be formatted to the new style.
>

done


>
> in pg_variable.h:
>



>
> - create_lsn ought to have a "var" prefix.
>

changed


>
> - typo: "typmode for variable's type"
>

fixed


>
> - What is the purpose of struct Variable? It seems very similar to
> FormData_pg_variable. At least a comment would be useful.
>

I wrote comment there:


/*
 * The Variable struct is based on FormData_pg_variable struct. 
Against\n * FormData_pg_variable it can hold node of deserialized expression used\n * for calculation of default value.\n */\n\n>\n>\n> Preserve the trailing comma in ParseExprKind.\n>\n\ndone\n\n\n>\n>\n> 0002\n>\n> expr_kind_allows_session_variables() should have some explanation\n> about criteria for determining which expression kinds should allow\n> variables.\n>\n\nI wrote comment there:\n\n /*\n * Returns true, when expression of kind allows using of\n * session variables.\n+ *\n+ * The session's variables can be used everywhere where\n+ * can be used external parameters. Session variables\n+ * are not allowed in DDL. Session's variables cannot be\n+ * used in constraints.\n+ *\n+ * The identifier can be parsed as an session variable\n+ * only in expression's kinds where session's variables\n+ * are allowed. This is the primary usage of this function.\n+ *\n+ * Second usage of this function is for decision if\n+ * an error message \"column does not exist\" or \"column\n+ * or variable does not exist\" should be printed. When\n+ * we are in expression, where session variables cannot\n+ * be used, we raise the first form or error message.\n */\n\n\n> Usually, we handle EXPR_KIND_* switches without default case, so we\n> get notified what needs to be changed if a new enum symbol is added.\n>\n\ndone\n\n\n>\n>\n> 0010\n>\n> The material from the tutorial (advanced.sgml) might be better in\n> ddl.sgml.\n>\n\nmoved\n\n\n>\n> In catalogs.sgml, the columns don't match the ones actually defined in\n> pg_variable.h in patch 0001 (e.g., create_lsn is missing and the order\n> doesn't match).\n>\n\nfixed\n\n\n\n>\n> (The order of columns in pg_variable.h didn't immediately make sense to\n> me either, so maybe there is a middle ground to be found.)\n>\n\nreordered. Still varcreate_lsn should be before varname column, because\nsanity check:\n\n--\n-- When ALIGNOF_DOUBLE==4 (e.g. 
AIX), the C ABI may impose 8-byte alignment\non\n-- some of the C types that correspond to TYPALIGN_DOUBLE SQL types. To\nensure\n-- catalog C struct layout matches catalog tuple layout, arrange for the\ntuple\n-- offset of each fixed-width, attalign='d' catalog column to be divisible\nby 8\n-- unconditionally. Keep such columns before the first NameData column of\nthe\n-- catalog, since packagers can override NAMEDATALEN to an odd number.\n\n\n\n>\n> session_variables_ambiguity_warning: There needs to be more\n> information about this. The current explanation is basically just,\n> \"warn if your query is confusing\". Why do I want that? Why would I\n> not want that? What is the alternative? What are some examples?\n> Shouldn't there be a standard behavior without a need to configure\n> anything?\n>\n\nI enhanced this entry:\n\n+ <para>\n+ The session variables can be shadowed by column references in a\nquery. This\n+ is an expected feature. The existing queries should not be broken\nby creating\n+ any session variable, because session variables are shadowed\nalways if the\n+ identifier is ambiguous. The variables should be named without\npossibility\n+ to collision with identifiers of other database objects (column\nnames or\n+ record field names). 
The warnings enabled by setting
<varname>session_variables_ambiguity_warning</varname>
+ should help with finding identifier's collisions.
+<programlisting>
+CREATE TABLE foo(a int);
+INSERT INTO foo VALUES(10);
+CREATE VARIABLE a int;
+LET a = 100;
+SELECT a FROM foo;
+</programlisting>
+
+<screen>
+ a
+----
+ 10
+(1 row)
+</screen>
+
+<programlisting>
+SET session_variables_ambiguity_warning TO on;
+SELECT a FROM foo;
+</programlisting>
+
+<screen>
+WARNING: session variable "a" is shadowed
+LINE 1: SELECT a FROM foo;
+ ^
+DETAIL: Session variables can be shadowed by columns, routine's variables
and routine's arguments with the same name.
+ a
+----
+ 10
+(1 row)
+</screen>
+ </para>
+ <para>
+ This feature can significantly increase size of logs, and then it
is
+ disabled by default, but for testing or development environments it
+ should be enabled.



>
> In allfiles.sgml, dropVariable should be before dropView.
>

fixed

Regards

Pavel", "msg_date": "Sun, 26 Mar 2023 08:53:49 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi,

I just have a few minor wording improvements for the various comments /
documentation you quoted.

On Sun, Mar 26, 2023 at 08:53:49AM +0200, Pavel Stehule wrote:
> út 21. 3. 2023 v 17:18 odesílatel Peter Eisentraut <
> peter.eisentraut@enterprisedb.com> napsal:
>
> > - What is the purpose of struct Variable? It seems very similar to
> > FormData_pg_variable. At least a comment would be useful.
> >
>
> I wrote comment there:
>
>
> /*
> * The Variable struct is based on FormData_pg_variable struct. 
Against\n> * FormData_pg_variable it can hold node of deserialized expression used\n> * for calculation of default value.\n> */\n\nDid you mean \"Unlike\" rather than \"Against\"?\n\n> > 0002\n> >\n> > expr_kind_allows_session_variables() should have some explanation\n> > about criteria for determining which expression kinds should allow\n> > variables.\n> >\n>\n> I wrote comment there:\n>\n> /*\n> * Returns true, when expression of kind allows using of\n> * session variables.\n> + * The session's variables can be used everywhere where\n> + * can be used external parameters. Session variables\n> + * are not allowed in DDL. Session's variables cannot be\n> + * used in constraints.\n> + *\n> + * The identifier can be parsed as an session variable\n> + * only in expression's kinds where session's variables\n> + * are allowed. This is the primary usage of this function.\n> + *\n> + * Second usage of this function is for decision if\n> + * an error message \"column does not exist\" or \"column\n> + * or variable does not exist\" should be printed. When\n> + * we are in expression, where session variables cannot\n> + * be used, we raise the first form or error message.\n> */\n\nMaybe\n\n/*\n * Returns true if the given expression kind is valid for session variables\n * Session variables can be used everywhere where external parameters can be\n * used. Session variables are not allowed in DDL commands or in constraints.\n *\n * An identifier can be parsed as a session variable only for expression kinds\n * where session variables are allowed. This is the primary usage of this\n * function.\n *\n * Second usage of this function is to decide whether \"column does not exist\" or\n * \"column or variable does not exist\" error message should be printed.\n * When we are in an expression where session variables cannot be used, we raise\n * the first form or error message.\n */\n\n> > session_variables_ambiguity_warning: There needs to be more\n> > information about this. 
The current explanation is basically just,\n> > \"warn if your query is confusing\". Why do I want that? Why would I\n> > not want that? What is the alternative? What are some examples?\n> > Shouldn't there be a standard behavior without a need to configure\n> > anything?\n> >\n>\n> I enhanced this entry:\n>\n> + <para>\n> + The session variables can be shadowed by column references in a\n> query. This\n> + is an expected feature. The existing queries should not be broken\n> by creating\n> + any session variable, because session variables are shadowed\n> always if the\n> + identifier is ambiguous. The variables should be named without\n> possibility\n> + to collision with identifiers of other database objects (column\n> names or\n> + record field names). The warnings enabled by setting\n> <varname>session_variables_ambiguity_warning</varname>\n> + should help with finding identifier's collisions.\n\nMaybe\n\nSession variables can be shadowed by column references in a query, this is an\nexpected behavior. Previously working queries shouldn't error out by creating\nany session variable, so session variables are always shadowed if an identifier\nis ambiguous. Variables should be referenced using an unambiguous identifier\nwithout any possibility for a collision with identifier of other database\nobjects (column names or record fields names). 
The warning messages emitted\nwhen enabling <varname>session_variables_ambiguity_warning</varname> can help\nfinding such identifier collision.\n\n> + </para>\n> + <para>\n> + This feature can significantly increase size of logs, and then it\n> is\n> + disabled by default, but for testing or development environments it\n> + should be enabled.\n\nMaybe\n\nThis feature can significantly increase log size, so it's disabled by default.\nFor testing or development environments it's recommended to enable it if you\nuse session variables.\n\n\n", "msg_date": "Sun, 26 Mar 2023 19:32:05 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "> On Fri, Mar 24, 2023 at 08:04:08AM +0100, Pavel Stehule wrote:\n> čt 23. 3. 2023 v 19:54 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> napsal:\n>\n> > čt 23. 3. 2023 v 16:33 odesílatel Peter Eisentraut <\n> > peter.eisentraut@enterprisedb.com> napsal:\n> >\n> >> The other issue is that by its nature this patch adds a lot of code in a\n> >> lot of places. Large patches are more likely to be successful if they\n> >> add a lot of code in one place or smaller amounts of code in a lot of\n> >> places. But this patch does both and it's just overwhelming. There is\n> >> so much new internal functionality and terminology. Variables can be\n> >> created, registered, initialized, stored, copied, prepared, set, freed,\n> >> removed, released, synced, dropped, and more. I don't know if anyone\n> >> has actually reviewed all that in detail.\n> >>\n> >> Has any effort been made to make this simpler, smaller, reduce scope,\n> >> refactoring, find commonalities with other features, try to manage the\n> >> complexity somehow?\n> >>\n> > I agree that this patch is large, but almost all code is simple. 
Complex\n> > code is \"only\" in 0002-session-variables.patch (113KB/438KB).\n> >\n> > Now, I have no idea how the functionality can be sensibly reduced or\n> > divided (no without significant performance loss). I see two difficult\n> > points in this code:\n> >\n> > 1. when to clean memory. The code implements cleaning very accurately -\n> > and this is unique in Postgres. Partially I implement some functionality of\n> > storage manager. Probably no code from Postgres can be reused, because\n> > there is not any support for global temporary objects. Cleaning based on\n> > sinval messages processing is difficult, but there is nothing else. The\n> > code is a little bit more complex, because there are three types of session\n> > variables: a) session variables, b) temp session variables, c) session\n> > variables with transaction scope. Maybe @c can be removed, and maybe we\n> > don't need to support not null default (this can simplify initialization).\n> > What do you think about it?\n> >\n> > 2. how to pass a variable's value to the executor. The implementation is\n> > based on extending the Param node, but it cannot reuse query params buffers\n> > and implements own.\n> > But it is hard to simplify code, because we want to support usage\n> > variables in queries, and usage in PL/pgSQL expressions too. And both are\n> > processed differently.\n> >\n>\n> Maybe I can divide the patch 0002-session-variables to three sections -\n> related to memory management, planning and execution?\n\nI agree, the patch scale is a bit overwhelming. It's worth noting that\ndue to the nature of this change certain heavy lifting has to be done in\nany case, plus I've got an impression that some part of the patch are\nquite solid (although I haven't reviewed everything, did anyone achieve\nthat milestone?). 
But still, it would be of great help to simplify the\ncurrent implementation, and I'm afraid the only way of doing this is to\nmake trades-off about functionality vs change size & complexity.\n\nMaybe instead splitting the patch into implementation components, it's\npossible to split it feature-by-feature, where every single patch would\nrepresent an independent (to a certain degree) functionality? I have in\nmind something like: catalog changes; base implementation; ACL support;\nxact actions implementation (on commit drop, etc); variables with\ndefault value; shadowing; etc. If such approach is possible, it will\ngive us: flexibility to apply only a subset of the whole patch series;\nsome understanding how much complexity is coming from each feature. What\ndo you think about this idea?\n\nI also recall somewhere earlier in the thread Pavel has mentioned that a\ntransactional version of session variables patch would be actually\nsimpler, and he has plans to implement it later on. Is there another\ntrade-off on the table we could think of, transactional vs\nnon-transactional session variables?\n\n\n", "msg_date": "Sun, 26 Mar 2023 19:42:42 +0200", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "> On Sun, Mar 26, 2023 at 07:32:05PM +0800, Julien Rouhaud wrote:\n> Hi,\n>\n> I just have a few minor wording improvements for the various comments /\n> documentation you quoted.\n\nTalking about documentation I've noticed that the implementation\ncontains few limitations, that are not mentioned in the docs. 
Examples\nare WITH queries:\n\n WITH x AS (LET public.svar = 100) SELECT * FROM x;\n ERROR: LET not supported in WITH query\n\nand using with set-returning functions (haven't found any related tests).\n\nAnother small note is about this change in the rowsecurity:\n\n /*\n -\t * For SELECT, UPDATE and DELETE, add security quals to enforce the USING\n -\t * policies. These security quals control access to existing table rows.\n -\t * Restrictive policies are combined together using AND, and permissive\n -\t * policies are combined together using OR.\n +\t * For SELECT, LET, UPDATE and DELETE, add security quals to enforce the\n +\t * USING policies. These security quals control access to existing table\n +\t * rows. Restrictive policies are combined together using AND, and\n +\t * permissive policies are combined together using OR.\n */\n\n From this commentary one may think that LET command supports row level\nsecurity, but I don't see it being implemented. A wrong commentary?\n\n\n", "msg_date": "Sun, 26 Mar 2023 19:51:10 +0200", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "ne 26. 3. 2023 v 13:32 odesílatel Julien Rouhaud <rjuju123@gmail.com>\nnapsal:\n\n> Hi,\n>\n> I just have a few minor wording improvements for the various comments /\n> documentation you quoted.\n>\n> On Sun, Mar 26, 2023 at 08:53:49AM +0200, Pavel Stehule wrote:\n> > út 21. 3. 2023 v 17:18 odesílatel Peter Eisentraut <\n> > peter.eisentraut@enterprisedb.com> napsal:\n> >\n> > > - What is the purpose of struct Variable? It seems very similar to\n> > > FormData_pg_variable. At least a comment would be useful.\n> > >\n> >\n> > I wrote comment there:\n> >\n> >\n> > /*\n> > * The Variable struct is based on FormData_pg_variable struct. 
Against\n> > * FormData_pg_variable it can hold node of deserialized expression used\n> > * for calculation of default value.\n> > */\n>\n> Did you mean \"Unlike\" rather than \"Against\"?\n>\n\nfixed\n\n\n>\n> > > 0002\n> > >\n> > > expr_kind_allows_session_variables() should have some explanation\n> > > about criteria for determining which expression kinds should allow\n> > > variables.\n> > >\n> >\n> > I wrote comment there:\n> >\n> > /*\n> > * Returns true, when expression of kind allows using of\n> > * session variables.\n> > + * The session's variables can be used everywhere where\n> > + * can be used external parameters. Session variables\n> > + * are not allowed in DDL. Session's variables cannot be\n> > + * used in constraints.\n> > + *\n> > + * The identifier can be parsed as an session variable\n> > + * only in expression's kinds where session's variables\n> > + * are allowed. This is the primary usage of this function.\n> > + *\n> > + * Second usage of this function is for decision if\n> > + * an error message \"column does not exist\" or \"column\n> > + * or variable does not exist\" should be printed. When\n> > + * we are in expression, where session variables cannot\n> > + * be used, we raise the first form or error message.\n> > */\n>\n> Maybe\n>\n> /*\n> * Returns true if the given expression kind is valid for session variables\n> * Session variables can be used everywhere where external parameters can\n> be\n> * used. Session variables are not allowed in DDL commands or in\n> constraints.\n> *\n> * An identifier can be parsed as a session variable only for expression\n> kinds\n> * where session variables are allowed. 
This is the primary usage of this\n> * function.\n> *\n> * Second usage of this function is to decide whether \"column does not\n> exist\" or\n> * \"column or variable does not exist\" error message should be printed.\n> * When we are in an expression where session variables cannot be used, we\n> raise\n> * the first form or error message.\n> */\n>\n\nchanged\n\n\n>\n> > > session_variables_ambiguity_warning: There needs to be more\n> > > information about this. The current explanation is basically just,\n> > > \"warn if your query is confusing\". Why do I want that? Why would I\n> > > not want that? What is the alternative? What are some examples?\n> > > Shouldn't there be a standard behavior without a need to configure\n> > > anything?\n> > >\n> >\n> > I enhanced this entry:\n> >\n> > + <para>\n> > + The session variables can be shadowed by column references in a\n> > query. This\n> > + is an expected feature. The existing queries should not be\n> broken\n> > by creating\n> > + any session variable, because session variables are shadowed\n> > always if the\n> > + identifier is ambiguous. The variables should be named without\n> > possibility\n> > + to collision with identifiers of other database objects (column\n> > names or\n> > + record field names). The warnings enabled by setting\n> > <varname>session_variables_ambiguity_warning</varname>\n> > + should help with finding identifier's collisions.\n>\n> Maybe\n>\n> Session variables can be shadowed by column references in a query, this is\n> an\n> expected behavior. Previously working queries shouldn't error out by\n> creating\n> any session variable, so session variables are always shadowed if an\n> identifier\n> is ambiguous. Variables should be referenced using an unambiguous\n> identifier\n> without any possibility for a collision with identifier of other database\n> objects (column names or record fields names). 
The warning messages\n> emitted\n> when enabling <varname>session_variables_ambiguity_warning</varname> can\n> help\n> finding such identifier collision.\n>\n> > +       </para>\n> > +       <para>\n> > +        This feature can significantly increase size of logs, and then\n> it\n> > is\n> > +        disabled by default, but for testing or development\n> environments it\n> > +        should be enabled.\n>\n> Maybe\n>\n> This feature can significantly increase log size, so it's disabled by\n> default.\n> For testing or development environments it's recommended to enable it if\n> you\n> use session variables.\n>\n\nreplaced\n\nThank you very much for these language correctures\n\nRegards\n\nPavel\n\np.s. I'll send updated patch after today or tomorrow - I have to fix broken dependency check after rebase", "msg_date": "Tue, 28 Mar 2023 21:03:47 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\r\n\r\nne 26. 3.
2023 v 19:53 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\r\nnapsal:\r\n\r\n> > On Sun, Mar 26, 2023 at 07:32:05PM +0800, Julien Rouhaud wrote:\r\n> > Hi,\r\n> >\r\n> > I just have a few minor wording improvements for the various comments /\r\n> > documentation you quoted.\r\n>\r\n> Talking about documentation I've noticed that the implementation\r\n> contains few limitations, that are not mentioned in the docs. Examples\r\n> are WITH queries:\r\n>\r\n> WITH x AS (LET public.svar = 100) SELECT * FROM x;\r\n> ERROR: LET not supported in WITH query\r\n>\r\n\r\n The LET statement doesn't support the RETURNING clause, so using inside\r\nCTE does not make any sense.\r\n\r\nDo you have some tips, where this behaviour should be mentioned?\r\n\r\n\r\n> and using with set-returning functions (haven't found any related tests).\r\n>\r\n\r\nThere it is:\r\n\r\n+CREATE VARIABLE public.svar AS int;\r\n+-- should be ok\r\n+LET public.svar = generate_series(1, 1);\r\n+-- should fail\r\n+LET public.svar = generate_series(1, 2);\r\n+ERROR: expression returned more than one row\r\n+LET public.svar = generate_series(1, 0);\r\n+ERROR: expression returned no rows\r\n+DROP VARIABLE public.svar;\r\n\r\n\r\n>\r\n> Another small note is about this change in the rowsecurity:\r\n>\r\n> /*\r\n> - * For SELECT, UPDATE and DELETE, add security quals to enforce\r\n> the USING\r\n> - * policies. These security quals control access to existing\r\n> table rows.\r\n> - * Restrictive policies are combined together using AND, and\r\n> permissive\r\n> - * policies are combined together using OR.\r\n> + * For SELECT, LET, UPDATE and DELETE, add security quals to\r\n> enforce the\r\n> + * USING policies. These security quals control access to\r\n> existing table\r\n> + * rows. 
Restrictive policies are combined together using AND, and\r\n> + * permissive policies are combined together using OR.\r\n> */\r\n>\r\n> From this commentary one may think that LET command supports row level\r\n> security, but I don't see it being implemented. A wrong commentary?\r\n>\r\n\r\nI don't think so. The row level security should be supported. I tested it\r\non example from doc:\r\n\r\nCREATE TABLE public.accounts (\r\n manager text,\r\n company text,\r\n contact_email text\r\n);\r\n\r\nCREATE VARIABLE public.v AS text;\r\n\r\nCOPY public.accounts (manager, company, contact_email) FROM stdin;\r\nt1role xxx t1role@xxx.org\r\nt2role yyy t2role@yyy.org\r\n\\.\r\n\r\nCREATE POLICY account_managers ON public.accounts USING ((manager =\r\nCURRENT_USER));\r\nALTER TABLE public.accounts ENABLE ROW LEVEL SECURITY;\r\n\r\nGRANT SELECT,INSERT ON TABLE public.accounts TO t1role;\r\nGRANT SELECT,INSERT ON TABLE public.accounts TO t2role;\r\n\r\nGRANT ALL ON VARIABLE public.v TO t1role;\r\nGRANT ALL ON VARIABLE public.v TO t2role;\r\n\r\n\r\n[pavel@localhost postgresql.master]$ psql\r\nAssertions: on\r\npsql (16devel)\r\nType \"help\" for help.\r\n\r\n(2023-03-28 21:32:33) postgres=# set role to t1role;\r\nSET\r\n(2023-03-28 21:32:40) postgres=# select * from accounts ;\r\n┌─────────┬─────────┬────────────────┐\r\n│ manager │ company │ contact_email │\r\n╞═════════╪═════════╪════════════════╡\r\n│ t1role │ xxx │ t1role@xxx.org │\r\n└─────────┴─────────┴────────────────┘\r\n(1 row)\r\n\r\n(2023-03-28 21:32:45) postgres=# let v = (select company from accounts);\r\nLET\r\n(2023-03-28 21:32:58) postgres=# select v;\r\n┌─────┐\r\n│ v │\r\n╞═════╡\r\n│ xxx │\r\n└─────┘\r\n(1 row)\r\n\r\n(2023-03-28 21:33:03) postgres=# set role to default;\r\nSET\r\n(2023-03-28 21:33:12) postgres=# set role to t2role;\r\nSET\r\n(2023-03-28 21:33:19) postgres=# select * from accounts ;\r\n┌─────────┬─────────┬────────────────┐\r\n│ manager │ company │ contact_email 
│\r\n╞═════════╪═════════╪════════════════╡\r\n│ t2role  │ yyy     │ t2role@yyy.org │\r\n└─────────┴─────────┴────────────────┘\r\n(1 row)\r\n\r\n(2023-03-28 21:33:22) postgres=# let v = (select company from accounts);\r\nLET\r\n(2023-03-28 21:33:26) postgres=# select v;\r\n┌─────┐\r\n│ v │\r\n╞═════╡\r\n│ yyy │\r\n└─────┘\r\n(1 row)\r\n\r\n\r\nRegards\r\n\r\nPavel", "msg_date": "Tue, 28 Mar 2023 21:34:20 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "ne 26. 3. 2023 v 13:32 odesílatel Julien Rouhaud <rjuju123@gmail.com>\nnapsal:\n\n> Hi,\n>\n> I just have a few minor wording improvements for the various comments /\n> documentation you quoted.\n>\n> On Sun, Mar 26, 2023 at 08:53:49AM +0200, Pavel Stehule wrote:\n> > út 21. 3. 2023 v 17:18 odesílatel Peter Eisentraut <\n> > peter.eisentraut@enterprisedb.com> napsal:\n> >\n> > > - What is the purpose of struct Variable? It seems very similar to\n> > > FormData_pg_variable. At least a comment would be useful.\n> > >\n> >\n> > I wrote comment there:\n> >\n> >\n> > /*\n> > * The Variable struct is based on FormData_pg_variable struct. Against\n> > * FormData_pg_variable it can hold node of deserialized expression used\n> > * for calculation of default value.\n> > */\n>\n> Did you mean \"Unlike\" rather than \"Against\"?\n>\n> > > 0002\n> > >\n> > > expr_kind_allows_session_variables() should have some explanation\n> > > about criteria for determining which expression kinds should allow\n> > > variables.\n> > >\n> >\n> > I wrote comment there:\n> >\n> > /*\n> > * Returns true, when expression of kind allows using of\n> > * session variables.\n> > + * The session's variables can be used everywhere where\n> > + * can be used external parameters. Session variables\n> > + * are not allowed in DDL. Session's variables cannot be\n> > + * used in constraints.\n> > + *\n> > + * The identifier can be parsed as an session variable\n> > + * only in expression's kinds where session's variables\n> > + * are allowed.
This is the primary usage of this function.\n> > + *\n> > + * Second usage of this function is for decision if\n> > + * an error message \"column does not exist\" or \"column\n> > + * or variable does not exist\" should be printed. When\n> > + * we are in expression, where session variables cannot\n> > + * be used, we raise the first form or error message.\n> > */\n>\n> Maybe\n>\n> /*\n> * Returns true if the given expression kind is valid for session variables\n> * Session variables can be used everywhere where external parameters can\n> be\n> * used. Session variables are not allowed in DDL commands or in\n> constraints.\n> *\n> * An identifier can be parsed as a session variable only for expression\n> kinds\n> * where session variables are allowed. This is the primary usage of this\n> * function.\n> *\n> * Second usage of this function is to decide whether \"column does not\n> exist\" or\n> * \"column or variable does not exist\" error message should be printed.\n> * When we are in an expression where session variables cannot be used, we\n> raise\n> * the first form or error message.\n> */\n>\n> > > session_variables_ambiguity_warning: There needs to be more\n> > > information about this. The current explanation is basically just,\n> > > \"warn if your query is confusing\". Why do I want that? Why would I\n> > > not want that? What is the alternative? What are some examples?\n> > > Shouldn't there be a standard behavior without a need to configure\n> > > anything?\n> > >\n> >\n> > I enhanced this entry:\n> >\n> > + <para>\n> > + The session variables can be shadowed by column references in a\n> > query. This\n> > + is an expected feature. The existing queries should not be\n> broken\n> > by creating\n> > + any session variable, because session variables are shadowed\n> > always if the\n> > + identifier is ambiguous. 
The variables should be named without\n> > possibility\n> > + to collision with identifiers of other database objects (column\n> > names or\n> > + record field names). The warnings enabled by setting\n> > <varname>session_variables_ambiguity_warning</varname>\n> > + should help with finding identifier's collisions.\n>\n> Maybe\n>\n> Session variables can be shadowed by column references in a query, this is\n> an\n> expected behavior. Previously working queries shouldn't error out by\n> creating\n> any session variable, so session variables are always shadowed if an\n> identifier\n> is ambiguous. Variables should be referenced using an unambiguous\n> identifier\n> without any possibility for a collision with identifier of other database\n> objects (column names or record fields names). The warning messages\n> emitted\n> when enabling <varname>session_variables_ambiguity_warning</varname> can\n> help\n> finding such identifier collision.\n>\n> > + </para>\n> > + <para>\n> > + This feature can significantly increase size of logs, and then\n> it\n> > is\n> > + disabled by default, but for testing or development\n> environments it\n> > + should be enabled.\n>\n> Maybe\n>\n> This feature can significantly increase log size, so it's disabled by\n> default.\n> For testing or development environments it's recommended to enable it if\n> you\n> use session variables.\n>\n\nwith language correctures\n\nRegards\n\nPavel", "msg_date": "Wed, 29 Mar 2023 08:04:42 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On 24.03.23 08:04, Pavel Stehule wrote:\n> Maybe I can divide the  patch 0002-session-variables to three sections - \n> related to memory management, planning and execution?\n\nPersonally, I find the existing split not helpful. There is no value \n(to me) in putting code, documentation, and tests in three separate \npatches. 
This is in fact counter-helpful (to me). Things like the \nDISCARD command (0005) and the error messages changes (0009) can be \nseparate patches, but most of the rest should probably be a single patch.\n\nI know you have been asked earlier in the thread to provide smaller \npatches, so don't change it just for me, but this is my opinion.\n\n\n\n", "msg_date": "Wed, 29 Mar 2023 12:17:29 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nne 26. 3. 2023 v 19:44 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\nnapsal:\n\n> > On Fri, Mar 24, 2023 at 08:04:08AM +0100, Pavel Stehule wrote:\n> > čt 23. 3. 2023 v 19:54 odesílatel Pavel Stehule <pavel.stehule@gmail.com\n> >\n> > napsal:\n> >\n> > > čt 23. 3. 2023 v 16:33 odesílatel Peter Eisentraut <\n> > > peter.eisentraut@enterprisedb.com> napsal:\n> > >\n> > >> The other issue is that by its nature this patch adds a lot of code\n> in a\n> > >> lot of places. Large patches are more likely to be successful if they\n> > >> add a lot of code in one place or smaller amounts of code in a lot of\n> > >> places. But this patch does both and it's just overwhelming. There\n> is\n> > >> so much new internal functionality and terminology. Variables can be\n> > >> created, registered, initialized, stored, copied, prepared, set,\n> freed,\n> > >> removed, released, synced, dropped, and more. 
I don't know if anyone\n> > >> has actually reviewed all that in detail.\n> > >>\n> > >> Has any effort been made to make this simpler, smaller, reduce scope,\n> > >> refactoring, find commonalities with other features, try to manage the\n> > >> complexity somehow?\n> > >>\n> > > I agree that this patch is large, but almost all code is simple.\n> Complex\n> > > code is \"only\" in 0002-session-variables.patch (113KB/438KB).\n> > >\n> > > Now, I have no idea how the functionality can be sensibly reduced or\n> > > divided (no without significant performance loss). I see two difficult\n> > > points in this code:\n> > >\n> > > 1. when to clean memory. The code implements cleaning very accurately -\n> > > and this is unique in Postgres. Partially I implement some\n> functionality of\n> > > storage manager. Probably no code from Postgres can be reused, because\n> > > there is not any support for global temporary objects. Cleaning based\n> on\n> > > sinval messages processing is difficult, but there is nothing else.\n> The\n> > > code is a little bit more complex, because there are three types of\n> session\n> > > variables: a) session variables, b) temp session variables, c) session\n> > > variables with transaction scope. Maybe @c can be removed, and maybe we\n> > > don't need to support not null default (this can simplify\n> initialization).\n> > > What do you think about it?\n> > >\n> > > 2. how to pass a variable's value to the executor. The implementation\n> is\n> > > based on extending the Param node, but it cannot reuse query params\n> buffers\n> > > and implements own.\n> > > But it is hard to simplify code, because we want to support usage\n> > > variables in queries, and usage in PL/pgSQL expressions too. And both\n> are\n> > > processed differently.\n> > >\n> >\n> > Maybe I can divide the patch 0002-session-variables to three sections -\n> > related to memory management, planning and execution?\n>\n> I agree, the patch scale is a bit overwhelming. 
It's worth noting that\n> due to the nature of this change certain heavy lifting has to be done in\n> any case, plus I've got an impression that some part of the patch are\n> quite solid (although I haven't reviewed everything, did anyone achieve\n> that milestone?). But still, it would be of great help to simplify the\n> current implementation, and I'm afraid the only way of doing this is to\n> make trades-off about functionality vs change size & complexity.\n>\n\nThere is not too much space for reduction - more - sometimes there is code\nreuse between features.\n\nI can reduce temporary session variables, but the same AtSubXact routines\nare used by memory purging routines, and if only if you drop all dependent\nfeatures, then you can get some interesting number of reduced lines. I can\nimagine very reduced feature set like\n\n1) no temporary variables, no reset at transaction end\n2) without default expressions - default is null\n3) direct memory cleaning on drop (without possibility of saved value after\nreverted drop) or cleaning at session end always\n\nNote - @1 and @3 shares code\n\nThis reduced implementation can still be useful. Probably it doesn't reduce\ntoo much code, but it can reduce non trivial code. I believe so almost all\nnot reduced code will be almost trivial\n\n\n\n>\n> Maybe instead splitting the patch into implementation components, it's\n> possible to split it feature-by-feature, where every single patch would\n> represent an independent (to a certain degree) functionality? I have in\n> mind something like: catalog changes; base implementation; ACL support;\n> xact actions implementation (on commit drop, etc); variables with\n> default value; shadowing; etc. If such approach is possible, it will\n> give us: flexibility to apply only a subset of the whole patch series;\n> some understanding how much complexity is coming from each feature. What\n> do you think about this idea?\n>\n\nI think cleaning, dropping can be moved to a separate patch. 
ACL support\nuses generic support (it is only a few lines).\n\nThe patch 02 can be splitted - I am not sure how these parts can be\nindependent. I'll try to split this patch, and we will see if it will be\nbetter.\n\n\n\n> I also recall somewhere earlier in the thread Pavel has mentioned that a\n> transactional version of session variables patch would be actually\n> simpler, and he has plans to implement it later on. Is there another\n> trade-off on the table we could think of, transactional vs\n> non-transactional session variables?\n>\n\nMaybe I didn't use the correct words. Implementation of transactional\nbehaviour can be relatively simple, but only if there is support for non-\ntransactional behaviour already.\n\nThe transactional variables need a little bit more code, because you should\nimplement mvcc. Current implementation is partially transactional - there\nare supported transactions and sub-transactions on catalog (and related\nmemory cleaning), the variables by themselves are not transactional.\nImplementing mvcc is not too difficult - because there are already routines\nrelated to handling subtransactions. But it increases the complexity of\nthese routines, so I postponed support for transactional variables to the\nnext step.\n\nRegards\n\nPavel\n\nHine 26. 3. 2023 v 19:44 odesílatel Dmitry Dolgov <9erthalion6@gmail.com> napsal:> On Fri, Mar 24, 2023 at 08:04:08AM +0100, Pavel Stehule wrote:\n> čt 23. 3. 2023 v 19:54 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> napsal:\n>\n> > čt 23. 3. 2023 v 16:33 odesílatel Peter Eisentraut <\n> > peter.eisentraut@enterprisedb.com> napsal:\n> >\n> >> The other issue is that by its nature this patch adds a lot of code in a\n> >> lot of places.  Large patches are more likely to be successful if they\n> >> add a lot of code in one place or smaller amounts of code in a lot of\n> >> places.  But this patch does both and it's just overwhelming.  There is\n> >> so much new internal functionality and terminology.  
Variables can be\n> >> created, registered, initialized, stored, copied, prepared, set, freed,\n> >> removed, released, synced, dropped, and more.  I don't know if anyone\n> >> has actually reviewed all that in detail.\n> >>\n> >> Has any effort been made to make this simpler, smaller, reduce scope,\n> >> refactoring, find commonalities with other features, try to manage the\n> >> complexity somehow?\n> >>\n> > I agree that this patch is large, but almost all code is simple. Complex\n> > code is \"only\" in 0002-session-variables.patch (113KB/438KB).\n> >\n> > Now, I have no idea how the functionality can be sensibly reduced or\n> > divided (no without significant performance loss). I see two difficult\n> > points in this code:\n> >\n> > 1. when to clean memory. The code implements cleaning very accurately -\n> > and this is unique in Postgres. Partially I implement some functionality of\n> > storage manager. Probably no code from Postgres can be reused, because\n> > there is not any support for global temporary objects. Cleaning based on\n> > sinval messages processing is difficult, but there is nothing else.  The\n> > code is a little bit more complex, because there are three types of session\n> > variables: a) session variables, b) temp session variables, c) session\n> > variables with transaction scope. Maybe @c can be removed, and maybe we\n> > don't need to support not null default (this can simplify initialization).\n> > What do you think about it?\n> >\n> > 2. how to pass a variable's value to the executor. The implementation is\n> > based on extending the Param node, but it cannot reuse query params buffers\n> > and implements own.\n> > But it is hard to simplify code, because we want to support usage\n> > variables in queries, and usage in PL/pgSQL expressions too. 
And both are\n> > processed differently.\n> >\n>\n> Maybe I can divide the  patch 0002-session-variables to three sections -\n> related to memory management, planning and execution?\n\nI agree, the patch scale is a bit overwhelming. It's worth noting that\ndue to the nature of this change certain heavy lifting has to be done in\nany case, plus I've got an impression that some part of the patch are\nquite solid (although I haven't reviewed everything, did anyone achieve\nthat milestone?). But still, it would be of great help to simplify the\ncurrent implementation, and I'm afraid the only way of doing this is to\nmake trades-off about functionality vs change size & complexity.There is not too much space for reduction - more - sometimes there is code reuse between features.I can reduce temporary session variables, but the same AtSubXact routines are used by memory purging routines, and if only if  you drop all dependent features, then you can get some interesting number of reduced lines. I can imagine very reduced feature set like1) no temporary variables, no reset at transaction end2) without default expressions - default is null3) direct memory cleaning on drop (without possibility of saved value after reverted drop) or cleaning at session end alwaysNote - @1 and @3 shares codeThis reduced implementation can still be useful. Probably it doesn't reduce too much code, but it can reduce non trivial code. I believe so almost all not reduced code will be almost trivial \n\nMaybe instead splitting the patch into implementation components, it's\npossible to split it feature-by-feature, where every single patch would\nrepresent an independent (to a certain degree) functionality? I have in\nmind something like: catalog changes; base implementation; ACL support;\nxact actions implementation (on commit drop, etc); variables with\ndefault value; shadowing; etc. 
If such approach is possible, it will\ngive us: flexibility to apply only a subset of the whole patch series;\nsome understanding how much complexity is coming from each feature. What\ndo you think about this idea?I think cleaning, dropping can be moved to a separate patch. ACL support uses generic support (it is only a few lines). The patch 02 can be splitted - I am not sure how these parts can be independent. I'll try to split this patch, and we will see if it will be better.\n\nI also recall somewhere earlier in the thread Pavel has mentioned that a\ntransactional version of session variables patch would be actually\nsimpler, and he has plans to implement it later on. Is there another\ntrade-off on the table we could think of, transactional vs\nnon-transactional session variables?Maybe I didn't use the correct words.  Implementation of transactional behaviour can be relatively simple, but only if there is support for non- transactional behaviour already.The transactional variables need a little bit more code, because you should implement mvcc. Current implementation is partially transactional - there are supported transactions and sub-transactions on catalog (and related memory cleaning), the variables by themselves are not transactional. Implementing mvcc is not too difficult - because there are already routines related to handling subtransactions. But it increases the complexity of these routines, so I postponed support for transactional variables to the next step.RegardsPavel", "msg_date": "Thu, 30 Mar 2023 10:05:30 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nst 29. 3. 
2023 v 12:17 odesílatel Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> napsal:\n\n> On 24.03.23 08:04, Pavel Stehule wrote:\n> > Maybe I can divide the patch 0002-session-variables to three sections -\n> > related to memory management, planning and execution?\n>\n> Personally, I find the existing split not helpful. There is no value\n> (to me) in putting code, documentation, and tests in three separate\n> patches. This is in fact counter-helpful (to me). Things like the\n> DISCARD command (0005) and the error messages changes (0009) can be\n> separate patches, but most of the rest should probably be a single patch.\n>\n> I know you have been asked earlier in the thread to provide smaller\n> patches, so don't change it just for me, but this is my opinion.\n>\n\nIf I reorganize the patch to the following structure, can it be useful for\nyou?\n\n1. really basic functionality (no temporary variables, no def expressions,\nno memory cleaning)\n SELECT variable\n LET should be supported + doc, + related tests.\n\n2. support for temporary variables (session, transaction scope),\n memory cleaning at the end of transaction\n\n3. PL/pgSQL support\n4. pg_dump\n5. shadowing warning\n6. ... others ...\n\nCan it be better for you?\n\nRegards\n\nPavel
", "msg_date": "Thu, 30 Mar 2023 10:49:46 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On 30.03.23 10:49, Pavel Stehule wrote:\n> If I reorganize the patch to the following structure, can it be useful \n> for you?\n> \n> 1. really basic functionality (no temporary variables, no def \n> expressions, no memory cleaning)\n>    SELECT variable\n>    LET should be supported + doc, + related tests.\n> \n> 2. support for temporary variables (session, transaction scope),\n>     memory cleaning at the end of transaction\n> \n> 3. PL/pgSQL support\n> 4. pg_dump\n> 5. shadowing warning\n> 6. ... others ...\n\nThat seems like an ok approach. 
The pg_dump support should probably go \ninto the first patch, so it's self-contained.\n\n\n", "msg_date": "Thu, 30 Mar 2023 15:40:27 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "> On Tue, Mar 28, 2023 at 09:34:20PM +0200, Pavel Stehule wrote:\n> Hi\n>\n> > Talking about documentation I've noticed that the implementation\n> > contains few limitations, that are not mentioned in the docs. Examples\n> > are WITH queries:\n> >\n> > WITH x AS (LET public.svar = 100) SELECT * FROM x;\n> > ERROR: LET not supported in WITH query\n> >\n>\n> The LET statement doesn't support the RETURNING clause, so using inside\n> CTE does not make any sense.\n>\n> Do you have some tips, where this behaviour should be mentioned?\n\nYeah, you're right, it's probably not worth adding. I usually find it a\ngood idea to explicitly mention any limitations, but the WITH docs\nactually have one line about statements without the RETURNING clause,\nplus indeed for LET it makes even less sense.\n\n> > and using with set-returning functions (haven't found any related tests).\n> >\n>\n> There it is:\n>\n> +CREATE VARIABLE public.svar AS int;\n> +-- should be ok\n> +LET public.svar = generate_series(1, 1);\n> +-- should fail\n> +LET public.svar = generate_series(1, 2);\n> +ERROR: expression returned more than one row\n> +LET public.svar = generate_series(1, 0);\n> +ERROR: expression returned no rows\n> +DROP VARIABLE public.svar;\n\nOh, interesting. I was looking for another error message from\nparse_func.c:\n\n set-returning functions are not allowed in LET assignment expression\n\nIs this one you've posted somehow different?\n\n> > Another small note is about this change in the rowsecurity:\n> >\n> > /*\n> > - * For SELECT, UPDATE and DELETE, add security quals to enforce\n> > the USING\n> > - * policies. 
These security quals control access to existing\n> > table rows.\n> > - * Restrictive policies are combined together using AND, and\n> > permissive\n> > - * policies are combined together using OR.\n> > + * For SELECT, LET, UPDATE and DELETE, add security quals to\n> > enforce the\n> > + * USING policies. These security quals control access to\n> > existing table\n> > + * rows. Restrictive policies are combined together using AND, and\n> > + * permissive policies are combined together using OR.\n> > */\n> >\n> > From this commentary one may think that LET command supports row level\n> > security, but I don't see it being implemented. A wrong commentary?\n> >\n>\n> I don't think so. The row level security should be supported. I tested it\n> on example from doc:\n>\n> [...]\n>\n> (2023-03-28 21:32:33) postgres=# set role to t1role;\n> SET\n> (2023-03-28 21:32:40) postgres=# select * from accounts ;\n> ┌─────────┬─────────┬────────────────┐\n> │ manager │ company │ contact_email │\n> ╞═════════╪═════════╪════════════════╡\n> │ t1role │ xxx │ t1role@xxx.org │\n> └─────────┴─────────┴────────────────┘\n> (1 row)\n>\n> (2023-03-28 21:32:45) postgres=# let v = (select company from accounts);\n> LET\n> (2023-03-28 21:32:58) postgres=# select v;\n> ┌─────┐\n> │ v │\n> ╞═════╡\n> │ xxx │\n> └─────┘\n> (1 row)\n>\n> (2023-03-28 21:33:03) postgres=# set role to default;\n> SET\n> (2023-03-28 21:33:12) postgres=# set role to t2role;\n> SET\n> (2023-03-28 21:33:19) postgres=# select * from accounts ;\n> ┌─────────┬─────────┬────────────────┐\n> │ manager │ company │ contact_email │\n> ╞═════════╪═════════╪════════════════╡\n> │ t2role │ yyy │ t2role@yyy.org │\n> └─────────┴─────────┴────────────────┘\n> (1 row)\n>\n> (2023-03-28 21:33:22) postgres=# let v = (select company from accounts);\n> LET\n> (2023-03-28 21:33:26) postgres=# select v;\n> ┌─────┐\n> │ v │\n> ╞═════╡\n> │ yyy │\n> └─────┘\n> (1 row)\n\nHm, but isn't the row level security enforced here on the select 
level,\nnot when assigning some value via LET? Plus, it seems the comment\noriginally refer to the command types (CMD_SELECT, etc), and there is no\nCMD_LET and no need for it, right?\n\nI'm just trying to understand if there was anything special done for\nsession variables in this regard, and if not, the commentary change\nseems to be not needed (I know, I know, it's totally nitpicking).\n\n\n", "msg_date": "Fri, 31 Mar 2023 21:29:46 +0200", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "pá 31. 3. 2023 v 21:31 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\nnapsal:\n\n> > On Tue, Mar 28, 2023 at 09:34:20PM +0200, Pavel Stehule wrote:\n> > Hi\n> >\n> > > Talking about documentation I've noticed that the implementation\n> > > contains few limitations, that are not mentioned in the docs. Examples\n> > > are WITH queries:\n> > >\n> > > WITH x AS (LET public.svar = 100) SELECT * FROM x;\n> > > ERROR: LET not supported in WITH query\n> > >\n> >\n> > The LET statement doesn't support the RETURNING clause, so using inside\n> > CTE does not make any sense.\n> >\n> > Do you have some tips, where this behaviour should be mentioned?\n>\n> Yeah, you're right, it's probably not worth adding. 
I usually find it a\n> good idea to explicitly mention any limitations, but the WITH docs\n> actually have one line about statements without the RETURNING clause,\n> plus indeed for LET it makes even less sense.\n>\n> > > and using with set-returning functions (haven't found any related\n> tests).\n> > >\n> >\n> > There it is:\n> >\n> > +CREATE VARIABLE public.svar AS int;\n> > +-- should be ok\n> > +LET public.svar = generate_series(1, 1);\n> > +-- should fail\n> > +LET public.svar = generate_series(1, 2);\n> > +ERROR: expression returned more than one row\n> > +LET public.svar = generate_series(1, 0);\n> > +ERROR: expression returned no rows\n> > +DROP VARIABLE public.svar;\n>\n> Oh, interesting. I was looking for another error message from\n> parse_func.c:\n>\n> set-returning functions are not allowed in LET assignment expression\n>\n> Is this one you've posted somehow different?\n>\n\nThis limit is correct, but the error message is maybe messy - I changed it.\n\nThis is protection against:\n\n(2023-04-01 06:25:50) postgres=# create variable xxx as int[];\nCREATE VARIABLE\n(2023-04-01 06:26:02) postgres=# let xxx[generate_series(1,3)] = 10;\nERROR: set-returning functions are not allowed in LET assignment expression\nLINE 1: let xxx[generate_series(1,3)] = 10;\n ^\n\nchange:\n case EXPR_KIND_LET_TARGET:\n- err = _(\"set-returning functions are not allowed in LET\nassignment expression\");\n+ err = _(\"set-returning functions are not allowed in LET target\nexpression\");\n break;\n\nThis case was not tested - so I did a new test for this case.\n\n\n> > > Another small note is about this change in the rowsecurity:\n> > >\n> > > /*\n> > > - * For SELECT, UPDATE and DELETE, add security quals to enforce\n> > > the USING\n> > > - * policies. 
These security quals control access to existing\n> > > table rows.\n> > > - * Restrictive policies are combined together using AND, and\n> > > permissive\n> > > - * policies are combined together using OR.\n> > > + * For SELECT, LET, UPDATE and DELETE, add security quals to\n> > > enforce the\n> > > + * USING policies. These security quals control access to\n> > > existing table\n> > > + * rows. Restrictive policies are combined together using AND,\n> and\n> > > + * permissive policies are combined together using OR.\n> > > */\n> > >\n> > > From this commentary one may think that LET command supports row level\n> > > security, but I don't see it being implemented. A wrong commentary?\n> > >\n> >\n> > I don't think so. The row level security should be supported. I tested\n> it\n> > on example from doc:\n> >\n> > [...]\n> >\n> > (2023-03-28 21:32:33) postgres=# set role to t1role;\n> > SET\n> > (2023-03-28 21:32:40) postgres=# select * from accounts ;\n> > ┌─────────┬─────────┬────────────────┐\n> > │ manager │ company │ contact_email │\n> > ╞═════════╪═════════╪════════════════╡\n> > │ t1role │ xxx │ t1role@xxx.org │\n> > └─────────┴─────────┴────────────────┘\n> > (1 row)\n> >\n> > (2023-03-28 21:32:45) postgres=# let v = (select company from accounts);\n> > LET\n> > (2023-03-28 21:32:58) postgres=# select v;\n> > ┌─────┐\n> > │ v │\n> > ╞═════╡\n> > │ xxx │\n> > └─────┘\n> > (1 row)\n> >\n> > (2023-03-28 21:33:03) postgres=# set role to default;\n> > SET\n> > (2023-03-28 21:33:12) postgres=# set role to t2role;\n> > SET\n> > (2023-03-28 21:33:19) postgres=# select * from accounts ;\n> > ┌─────────┬─────────┬────────────────┐\n> > │ manager │ company │ contact_email │\n> > ╞═════════╪═════════╪════════════════╡\n> > │ t2role │ yyy │ t2role@yyy.org │\n> > └─────────┴─────────┴────────────────┘\n> > (1 row)\n> >\n> > (2023-03-28 21:33:22) postgres=# let v = (select company from accounts);\n> > LET\n> > (2023-03-28 21:33:26) postgres=# select v;\n> > ┌─────┐\n> > │ v 
│\n> > ╞═════╡\n> > │ yyy │\n> > └─────┘\n> > (1 row)\n>\n> Hm, but isn't the row level security enforced here on the select level,\n> not when assigning some value via LET? Plus, it seems the comment\n> originally refer to the command types (CMD_SELECT, etc), and there is no\n> CMD_LET and no need for it, right?\n>\n> I'm just trying to understand if there was anything special done for\n> session variables in this regard, and if not, the commentary change\n> seems to be not needed (I know, I know, it's totally nitpicking).\n>\n\nI am not sure at this point. It is true that it doesn't modify any lines\nthere, and this is the reason why this comment is maybe messy.\n\nI'll remove it.\n\np.s. I am sending an updated patch still in the old format. Refactoring to\na new format for Peter can take some time, and the patch in the old format\ncan be available for people who can do some tests or some checks.\n\n\n\nRegards\n\nPavel
", "msg_date": "Sat, 1 Apr 2023 07:21:03 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On Sun, 26 Mar 2023 at 07:34, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> This feature can significantly increase log size, so it's disabled by default.\n> For testing or development environments it's recommended to enable it if you\n> use session variables.\n\nI think it's generally not practical to have warnings for valid DML.\nEffectively warnings in DML are errors since they make the syntax just\nunusable. I suppose it's feasible to have it as a debugging option\nthat defaults to off but I'm not sure it's really useful.\n\nI suppose it raises the question of whether session variables should\nbe in pg_class and be in the same namespace as tables so that\ncollisions are impossible. I haven't looked at the code to see if\nthat's feasible or reasonable. 
But this feels a bit like what happened\nwith sequences where they used to be a wholly special thing and later\nwe realized everything was simpler if they were just a kind of\nrelation.\n\n-- \ngreg\n\n\n", "msg_date": "Wed, 5 Apr 2023 13:19:42 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "st 5. 4. 2023 v 19:20 odesílatel Greg Stark <stark@mit.edu> napsal:\n\n> On Sun, 26 Mar 2023 at 07:34, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > This feature can significantly increase log size, so it's disabled by\n> default.\n> > For testing or development environments it's recommended to enable it if\n> you\n> > use session variables.\n>\n> I think it's generally not practical to have warnings for valid DML.\n> Effectively warnings in DML are errors since they make the syntax just\n> unusable. I suppose it's feasible to have it as a debugging option\n> that defaults to off but I'm not sure it's really useful.\n>\n\nIt is a tool that should help with collision detection. Without it, it can\nbe pretty hard to detect it. It is similar to plpgsql's extra warnings.\n\n\n> I suppose it raises the question of whether session variables should\n> be in pg_class and be in the same namespace as tables so that\n> collisions are impossible. I haven't looked at the code to see if\n> that's feasible or reasonable. But this feels a bit like what happened\n> with sequences where they used to be a wholly special thing and later\n> we realized everything was simpler if they were just a kind of\n> relation.\n>\n\nThe first patch did it. But at the end, it doesn't reduce conflicts,\nbecause usually the conflicts are between variables and table's attributes\n(columns).\n\nexample\n\ncreate variable a as int;\ncreate table foo(a int);\n\nselect a from foo; -- the \"a\" is ambiguous, variable \"a\" is shadowed\n\nThis is a basic case, and the unique names don't help. 
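A sketch of how this plays out in a session (the syntax follows the examples in this thread; the qualified form in the last query is how collisions can be resolved, as discussed later in this thread, so treat it as illustrative rather than tested output):

```sql
-- assumed syntax from the patch discussed in this thread
CREATE VARIABLE public.a AS int;
LET public.a = 10;

CREATE TABLE foo(a int);
INSERT INTO foo VALUES (20);

-- the column "a" shadows the variable "a": this reads the column
SELECT a FROM foo;

-- qualified identifiers should make both reachable again
SELECT public.a AS variable_value, foo.a AS column_value FROM foo;
```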
The variables are\nmore aggressive in the namespace than tables, because they don't have to appear\nin the FROM clause. This is the reason why we specify that variables are always\nshadowed. Only this behaviour is safe and robust. I cannot break any query\n(that doesn't use variables) by creating any variable. On the other hand, the\nexperience from Oracle's PL/SQL or from old PL/pgSQL is that unwanted\nshadowing can be hard to investigate (without some tools).\n\nPL/pgSQL doesn't allow conflicts between PL/pgSQL variables and SQL (now),\nand I think that is best. But the scope of PL/pgSQL variables is relatively\nsmall, so very strict behaviour is acceptable.\n\nSession variables are somewhere between tables and attributes. The catalog\npg_class could be extended with columns for variables, but it already does a lot,\nso I think it is not practical.\n\n\nRegards\n\nPavel\n\n\n\n>\n> --\n> greg\n>
", "msg_date": "Wed, 5 Apr 2023 19:58:02 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On Thu, Apr 6, 2023 at 1:58 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> st 5. 4. 
2023 v 19:20 odesílatel Greg Stark <stark@mit.edu> napsal:\n>>\n>> On Sun, 26 Mar 2023 at 07:34, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>> >\n>> > This feature can significantly increase log size, so it's disabled by default.\n>> > For testing or development environments it's recommended to enable it if you\n>> > use session variables.\n>>\n>> I think it's generally not practical to have warnings for valid DML.\n>> Effectively warnings in DML are errors since they make the syntax just\n>> unusable. I suppose it's feasible to have it as a debugging option\n>> that defaults to off but I'm not sure it's really useful.\n>\n>\n> It is a tool that should help with collision detection. Without it, it can be pretty hard to detect it. It is similar to plpgsql's extra warnings.\n\nAnother example is escape_string_warning, which can also emit warning\nfor valid DML. I once had to fix some random framework that a\nprevious employer was using, in order to move to a more recent pg\nversion and have standard_conforming_strings on, and having\nescape_string_warning was quite helpful.\n\n\n", "msg_date": "Thu, 6 Apr 2023 23:40:56 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On Wed, Apr 5, 2023 at 1:58 PM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n>\n>\n> st 5. 4. 2023 v 19:20 odesílatel Greg Stark <stark@mit.edu> napsal:\n>\n>> On Sun, 26 Mar 2023 at 07:34, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>> >\n>> > This feature can significantly increase log size, so it's disabled by\n>> default.\n>> > For testing or development environments it's recommended to enable it\n>> if you\n>> > use session variables.\n>>\n>> I think it's generally not practical to have warnings for valid DML.\n>> Effectively warnings in DML are errors since they make the syntax just\n>> unusable. 
I suppose it's feasible to have it as a debugging option\n>> that defaults to off but I'm not sure it's really useful.\n>>\n>\n> It is a tool that should help with collision detection. Without it, it\n> can be pretty hard to detect it. It is similar to plpgsql's extra warnings.\n>\n>\n>> I suppose it raises the question of whether session variables should\n>> be in pg_class and be in the same namespace as tables so that\n>> collisions are impossible. I haven't looked at the code to see if\n>> that's feasible or reasonable. But this feels a bit like what happened\n>> with sequences where they used to be a wholly special thing and later\n>> we realized everything was simpler if they were just a kind of\n>> relation.\n>>\n>\n> The first patch did it. But at the end, it doesn't reduce conflicts,\n> because usually the conflicts are between variables and table's attributes\n> (columns).\n>\n> example\n>\n> create variable a as int;\n> create table foo(a int);\n>\n> select a from foo; -- the \"a\" is ambiguous, variable \"a\" is shadowed\n>\n> This is a basic case, and the unique names don't help. The variables are\n> more aggressive in namespace than tables, because they don't require be in\n> FROM clause. This is the reason why we specify so variables are always\n> shadowed. Only this behaviour is safe and robust. I cannot break any query\n> (that doesn't use variables) by creating any variable. On second hand, an\n> experience from Oracle's PL/SQL or from old PLpgSQL is, so unwanted\n> shadowing can be hard to investigate (without some tools).\n>\n> PL/pgSQL doesn't allow conflict between PL/pgSQL variables, and SQL (now),\n> and I think so it is best. But the scope of PLpgSQL variables is relatively\n> small, so very strict behaviour is acceptable.\n>\n> The session variables are some between tables and attributes. 
The catalog\n> pg_class can be enhanced about columns for variables, but it does a lot\n> now, so I think it is not practical.\n>\n>>\n>> I agree about shadowing schema variables. But is there no way to fix\nthat so that you can dereference the variable?\n[Does an Alias work inside a procedure against a schema var?]\nDoes adding a schema prefix resolve it properly, so your example, I could\ndo:\nSELECT schema_var.a AS var_a, a as COL_A from t;\n\nAgain, I like the default that it is hidden, but I can envision needing\nboth?\n\nRegards, Kirk\n\nOn Wed, Apr 5, 2023 at 1:58 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:st 5. 4. 2023 v 19:20 odesílatel Greg Stark <stark@mit.edu> napsal:On Sun, 26 Mar 2023 at 07:34, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> This feature can significantly increase log size, so it's disabled by default.\n> For testing or development environments it's recommended to enable it if you\n> use session variables.\n\nI think it's generally not practical to have warnings for valid DML.\nEffectively warnings in DML are errors since they make the syntax just\nunusable. I suppose it's feasible to have it as a debugging option\nthat defaults to off but I'm not sure it's really useful.It is a tool that should help with collision detection.  Without it, it can be pretty hard to detect it. It is similar to plpgsql's extra warnings.\n\nI suppose it raises the question of whether session variables should\nbe in pg_class and be in the same namespace as tables so that\ncollisions are impossible. I haven't looked at the code to see if\nthat's feasible or reasonable. But this feels a bit like what happened\nwith sequences where they used to be a wholly special thing and later\nwe realized everything was simpler if they were just a kind of\nrelation.The first patch did it. 
But at the end, it doesn't reduce conflicts, because usually the conflicts are between variables and table's attributes (columns).examplecreate variable a as int;create table foo(a int);select a from foo; -- the \"a\" is ambiguous, variable \"a\" is shadowedThis is a basic case, and the unique names don't help. The variables are more aggressive in namespace than tables, because they don't require be in FROM clause. This is the reason why we specify so variables are always shadowed. Only this behaviour is safe and robust. I cannot break any query (that doesn't use variables) by creating any variable. On second hand, an experience from Oracle's PL/SQL or from old PLpgSQL is, so unwanted shadowing can be hard to investigate (without some tools).PL/pgSQL doesn't allow conflict between PL/pgSQL variables, and SQL (now), and I think so it is best. But the scope of PLpgSQL variables is relatively small, so very strict behaviour is acceptable.The session variables are some between tables and attributes. The catalog pg_class can be enhanced about columns for variables, but it does a lot now, so I think it is not practical. I agree about shadowing schema variables.  But is there no way to fix that so that you can dereference the variable?[Does an Alias work inside a procedure against a schema var?]Does adding a schema prefix resolve it  properly, so your example, I could do:SELECT schema_var.a AS var_a, a as COL_A from t;Again, I like the default that it is hidden, but I can envision needing both?Regards, Kirk", "msg_date": "Thu, 6 Apr 2023 13:17:23 -0400", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": ">\n>\n>> example\n>>\n>> create variable a as int;\n>> create table foo(a int);\n>>\n>> select a from foo; -- the \"a\" is ambiguous, variable \"a\" is shadowed\n>>\n>> This is a basic case, and the unique names don't help. 
The variables are\n>> more aggressive in namespace than tables, because they don't require be in\n>> FROM clause. This is the reason why we specify so variables are always\n>> shadowed. Only this behaviour is safe and robust. I cannot break any query\n>> (that doesn't use variables) by creating any variable. On second hand, an\n>> experience from Oracle's PL/SQL or from old PLpgSQL is, so unwanted\n>> shadowing can be hard to investigate (without some tools).\n>>\n>> PL/pgSQL doesn't allow conflict between PL/pgSQL variables, and SQL\n>> (now), and I think so it is best. But the scope of PLpgSQL variables is\n>> relatively small, so very strict behaviour is acceptable.\n>>\n>> The session variables are some between tables and attributes. The catalog\n>> pg_class can be enhanced about columns for variables, but it does a lot\n>> now, so I think it is not practical.\n>>\n>>>\n>>> I agree about shadowing schema variables. But is there no way to fix\n> that so that you can dereference the variable?\n> [Does an Alias work inside a procedure against a schema var?]\n> Does adding a schema prefix resolve it properly, so your example, I could\n> do:\n> SELECT schema_var.a AS var_a, a as COL_A from t;\n>\n\nYes, using schema can fix collisions in almost all cases. There are some\npossible cases, when the schema name is the same as some variable name, and\nin these cases there can still be collisions (and still there is a\npossibility to use catalog.schema.object and it can fix a collision). You\ncan use a qualified identifier and again in most cases it fixes collisions.\nThese cases are tested in regression tests.\n\nRegards\n\nPavel\n\n\n> Again, I like the default that it is hidden, but I can envision needing\n> both?\n>\n> Regards, Kirk\n>\n
", "msg_date": "Thu, 6 Apr 2023 19:28:49 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On Thu, Mar 30, 2023 at 4:06 AM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n> Hi\n>\n> ne 26. 3. 
2023 v 19:44 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\n> napsal:\n>\n>> > On Fri, Mar 24, 2023 at 08:04:08AM +0100, Pavel Stehule wrote:\n>> > čt 23. 3. 2023 v 19:54 odesílatel Pavel Stehule <\n>> pavel.stehule@gmail.com>\n>> > napsal:\n>> >\n>> > > čt 23. 3. 2023 v 16:33 odesílatel Peter Eisentraut <\n>> > > peter.eisentraut@enterprisedb.com> napsal:\n>> > >\n>> > >> The other issue is that by its nature this patch adds a lot of code\n>> in a\n>> > >> lot of places. Large patches are more likely to be successful if\n>> they\n>> ...\n>> I agree, the patch scale is a bit overwhelming. It's worth noting that\n>> due to the nature of this change certain heavy lifting has to be done in\n>> any case, plus I've got an impression that some part of the patch are\n>> quite solid (although I haven't reviewed everything, did anyone achieve\n>> that milestone?). But still, it would be of great help to simplify the\n>> current implementation, and I'm afraid the only way of doing this is to\n>> make trades-off about functionality vs change size & complexity.\n>>\n>\n> There is not too much space for reduction - more - sometimes there is code\n> reuse between features.\n>\n> I can reduce temporary session variables, but the same AtSubXact routines\n> are used by memory purging routines, and if only if you drop all dependent\n> features, then you can get some interesting number of reduced lines. I can\n> imagine very reduced feature set like\n>\n> 1) no temporary variables, no reset at transaction end\n> 2) without default expressions - default is null\n> 3) direct memory cleaning on drop (without possibility of saved value\n> after reverted drop) or cleaning at session end always\n>\n> Note - @1 and @3 shares code\n>\n> Please don't remove #2. With Default Values, I was eyeballing these as\npseudo constants. I find I have a DRY (Don't Repeat Yourself) issue in our\ncurrent code base (PLPGSQL) because of the lack of shared constants\nthroughout the application layer. 
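For illustration, the proposed feature could cover that use case roughly as below (hypothetical names; the IMMUTABLE and DEFAULT clauses come from the patch items discussed in this thread, and the exact clause placement may differ from the final syntax):

```sql
-- hypothetical: a shared read-only constant, no wrapper function needed
CREATE IMMUTABLE VARIABLE const.max_retries AS int DEFAULT 5;

SELECT const.max_retries;  -- usable directly in queries and PL/pgSQL
```
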
We literally created a CONST schema with\nSQL functions that return a set value. It's kludgy, but clear enough. (We\nhave approximately 50 of these).\n\nRegards, Kirk
", "msg_date": "Thu, 6 Apr 2023 23:13:03 -0400", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nstill in old layout - but it can be useful for testing by someone\n\nfix build doc, fix regress tests\n\nRegards\n\nPavel", "msg_date": "Tue, 16 May 2023 20:11:14 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\n\nút 16. 5. 2023 v 20:11 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> still in old layout - but it can be useful for testing by someone\n>\n> fix build doc, fix regress tests\n>\n\nfresh rebase\n\n\n\n>\n> Regards\n>\n> Pavel\n>", "msg_date": "Wed, 17 May 2023 05:20:22 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nčt 30. 3. 2023 v 15:40 odesílatel Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> napsal:\n\n> On 30.03.23 10:49, Pavel Stehule wrote:\n> > If I reorganize the patch to the following structure, can be it useful\n> > for you?\n> >\n> > 1. really basic functionality (no temporary variables, no def\n> > expressions, no memory cleaning)\n> > SELECT variable\n> > LET should be supported + doc, + related tests.\n> >\n> > 2. support for temporary variables (session, transaction scope),\n> > memory cleaning at the end of transaction\n> >\n> > 3. PL/pgSQL support\n> > 4. pg_dump\n> > 5. shadowing warning\n> > 6. ... 
others ...\n>\n\nI am sending a refactorized patch. Mainly I rewrote memory cleaning - now\nit should be more robust and more simple (no more mem alloc in sinval\nhandler). Against the previous patch, only the syntax \"LET var = DEFAULT\"\nis not supported. I don't think it should be supported now. These patches\nare incremental - every patch contains related doc, regress tests and can\nbe tested incrementally.\n\nNew organization\n\n1. basic CREATE VARIABLE, DROP VARIABLE, GRANT, REVOKE, ALTER, pg_dump\n2. basic SELECT var, LET var = value\n3. DISCARD VARIABLES\n4. cleaning memory used by dropped variables\n5. temporary variables + ON COMMIT DROP clause support\n6. ON TRANSACTION END RESET clause support\n7. DEFAULT expr clause support\n8. support NOT NULL and IMMUTABLE clauses\n9. use message \"column or variable doesn't exists\" instead \"column doesn't\nexists\"\n\nRegards\n\nPavel\n\n\n\n>\n> That seems like an ok approach. The pg_dump support should probably go\n> into the first patch, so it's self-contained.\n>", "msg_date": "Thu, 22 Jun 2023 19:59:30 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nNew organization\n>\n> 1. basic CREATE VARIABLE, DROP VARIABLE, GRANT, REVOKE, ALTER, pg_dump\n> 2. basic SELECT var, LET var = value\n> 3. DISCARD VARIABLES\n> 4. cleaning memory used by dropped variables\n> 5. temporary variables + ON COMMIT DROP clause support\n> 6. ON TRANSACTION END RESET clause support\n> 7. DEFAULT expr clause support\n> 8. support NOT NULL and IMMUTABLE clauses\n> 9. 
use message \"column or variable doesn't exists\" instead \"column doesn't\n> exists\"\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n\nfix tests and meson test configuration\n\nRegards\n\nPavel", "msg_date": "Fri, 23 Jun 2023 07:28:39 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nfresh rebase\n\nRegards\n\nPavel", "msg_date": "Sat, 1 Jul 2023 07:13:15 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nfix warning\n\n+WARNING: roles created by regression test cases should have names\nstarting with \"regress_\"", "msg_date": "Thu, 6 Jul 2023 16:21:31 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nonly rebase\n\nRegards\n\nPavel", "msg_date": "Mon, 10 Jul 2023 09:23:58 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nfresh rebase\n\nRegards\n\nPavel", "msg_date": "Sat, 22 Jul 2023 20:28:31 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nfresh rebase\n\nregards\n\nPavel", "msg_date": "Thu, 3 Aug 2023 08:15:13 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "> On Thu, Aug 03, 2023 at 08:15:13AM +0200, Pavel Stehule wrote:\n> Hi\n>\n> fresh rebase\n\nThanks for continuing efforts. 
The new patch structure looks better to\nme (although the boundary between patches 0001 and 0002 is somewhat\nfuzzy, e.g. the function NameListToString is used already in the first\none, but defined in the second). Couple of commentaries along the way:\n\n* Looks like it's common to use BKI_DEFAULT when defining catalog\nentities, something like BKI_DEFAULT(-1) for typmod, BKI_DEFAULT(0) for\ncollation, etc. Does it make sense to put few default values into\npg_variable as well?\n\n* The first patch contains:\n\n diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c\n @@ -2800,6 +2800,8 @@ AbortTransaction(void)\n AtAbort_Portals();\n smgrDoPendingSyncs(false, is_parallel_worker);\n AtEOXact_LargeObject(false);\n +\n +\t/* 'false' means it's abort */\n AtAbort_Notify();\n AtEOXact_RelationMap(false, is_parallel_worker);\n AtAbort_Twophase();\n\nWhat does the commentary refer to, is it needed?\n\n* I see ExplainOneQuery got a new argument:\n\n static void ExplainOneQuery(Query *query, int cursorOptions,\n -\t\t\t\t\t\t\tIntoClause *into, ExplainState *es,\n +\t\t\t\t\t\t\tIntoClause *into, Oid targetvar, ExplainState *es,\n const char *queryString, ParamListInfo params,\n QueryEnvironment *queryEnv);\n\n From what I understand it represents a potential session variable to be\nexplained. Isn't it too specific for this interface, could it be put\nsomewhere else? To be honest, I don't have any suggestions myself, but\nit feels a bit out of place here.\n\n* Session variable validity logic is not always clear, at least to me,\nproducing following awkward pieces of code:\n\n +\t\tif (!svar->is_valid)\n +\t\t{\n +\t\t\tif (is_session_variable_valid(svar))\n +\t\t\t\tsvar->is_valid = true;\n\nI get it as there are two ways how a variable could be invalid?\n\n* It's not always easy to follow which failure modes are taken care of. E.g.\n\n +\t * Don't try to use possibly invalid data from svar. 
And we don't want to\n +\t * overwrite invalid svar immediately. The datumCopy can fail, and in this\n +\t * case, the stored value will be invalid still.\n\nI couldn't find any similar precautions, how exactly datumCopy can fail,\nare you referring to palloc/memcpy failures?\n\nAnother confusing example was this one at the end of set_session_variable:\n\n +\t/*\n +\t * XXX While unlikely, an error here is possible. It wouldn't leak memory\n +\t * as the allocated chunk has already been correctly assigned to the\n +\t * session variable, but would contradict this function contract, which is\n +\t * that this function should either succeed or leave the current value\n +\t * untouched.\n +\t */\n +\telog(DEBUG1, \"session variable \\\"%s.%s\\\" (oid:%u) has new value\",\n +\t\t get_namespace_name(get_session_variable_namespace(svar->varid)),\n +\t\t get_session_variable_name(svar->varid),\n +\t\t svar->varid);\n\nIt's not clear, which exactly error you're talking about, it's the last\ninstruction in the function.\n\nMaybe it would be beneficial to have some overarching description, all\nin one place, about how session variables implementation handles various\nfailures?\n\n\n", "msg_date": "Fri, 11 Aug 2023 17:55:26 +0200", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On Fri, Aug 11, 2023 at 05:55:26PM +0200, Dmitry Dolgov wrote:\n>\n> Another confusing example was this one at the end of set_session_variable:\n>\n> +\t/*\n> +\t * XXX While unlikely, an error here is possible. 
It wouldn't leak memory\n> +\t * as the allocated chunk has already been correctly assigned to the\n> +\t * session variable, but would contradict this function contract, which is\n> +\t * that this function should either succeed or leave the current value\n> +\t * untouched.\n> +\t */\n> +\telog(DEBUG1, \"session variable \\\"%s.%s\\\" (oid:%u) has new value\",\n> +\t\t get_namespace_name(get_session_variable_namespace(svar->varid)),\n> +\t\t get_session_variable_name(svar->varid),\n> +\t\t svar->varid);\n>\n> It's not clear, which exactly error you're talking about, it's the last\n> instruction in the function.\n\nFTR I think I'm the one that changed that. The error I was talking about is\nelog() itself (in case of OOM for instance), or even one of the get_* call, if\nrunning with log_level <= DEBUG1. It's clearly really unlikely but still\npossible, thus this comment which also tries to explain why this elog() is not\ndone earlier.\n\n\n", "msg_date": "Sat, 12 Aug 2023 09:28:19 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "> On Sat, Aug 12, 2023 at 09:28:19AM +0800, Julien Rouhaud wrote:\n> On Fri, Aug 11, 2023 at 05:55:26PM +0200, Dmitry Dolgov wrote:\n> >\n> > Another confusing example was this one at the end of set_session_variable:\n> >\n> > +\t/*\n> > +\t * XXX While unlikely, an error here is possible. 
It wouldn't leak memory\n> > +\t * as the allocated chunk has already been correctly assigned to the\n> > +\t * session variable, but would contradict this function contract, which is\n> > +\t * that this function should either succeed or leave the current value\n> > +\t * untouched.\n> > +\t */\n> > +\telog(DEBUG1, \"session variable \\\"%s.%s\\\" (oid:%u) has new value\",\n> > +\t\t get_namespace_name(get_session_variable_namespace(svar->varid)),\n> > +\t\t get_session_variable_name(svar->varid),\n> > +\t\t svar->varid);\n> >\n> > It's not clear, which exactly error you're talking about, it's the last\n> > instruction in the function.\n>\n> FTR I think I'm the one that changed that. The error I was talking about is\n> elog() itself (in case of OOM for instance), or even one of the get_* call, if\n> running with log_level <= DEBUG1. It's clearly really unlikely but still\n> possible, thus this comment which also tries to explain why this elog() is not\n> done earlier.\n\nI see, thanks for clarification. Absolutely nitpicking, but the crucial\n\"that's why this elog is not done earlier\" is only assumed in the\ncomment between the lines, not stated out loud :)\n\n\n", "msg_date": "Sat, 12 Aug 2023 13:20:03 +0200", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On Sat, Aug 12, 2023 at 01:20:03PM +0200, Dmitry Dolgov wrote:\n> > On Sat, Aug 12, 2023 at 09:28:19AM +0800, Julien Rouhaud wrote:\n> > On Fri, Aug 11, 2023 at 05:55:26PM +0200, Dmitry Dolgov wrote:\n> > >\n> > > Another confusing example was this one at the end of set_session_variable:\n> > >\n> > > +\t/*\n> > > +\t * XXX While unlikely, an error here is possible. 
It wouldn't leak memory\n> > > +\t * as the allocated chunk has already been correctly assigned to the\n> > > +\t * session variable, but would contradict this function contract, which is\n> > > +\t * that this function should either succeed or leave the current value\n> > > +\t * untouched.\n> > > +\t */\n> > > +\telog(DEBUG1, \"session variable \\\"%s.%s\\\" (oid:%u) has new value\",\n> > > +\t\t get_namespace_name(get_session_variable_namespace(svar->varid)),\n> > > +\t\t get_session_variable_name(svar->varid),\n> > > +\t\t svar->varid);\n> > >\n> > > It's not clear, which exactly error you're talking about, it's the last\n> > > instruction in the function.\n> >\n> > FTR I think I'm the one that changed that. The error I was talking about is\n> > elog() itself (in case of OOM for instance), or even one of the get_* call, if\n> > running with log_level <= DEBUG1. It's clearly really unlikely but still\n> > possible, thus this comment which also tries to explain why this elog() is not\n> > done earlier.\n>\n> I see, thanks for clarification. Absolutely nitpicking, but the crucial\n> \"that's why this elog is not done earlier\" is only assumed in the\n> comment between the lines, not stated out loud :)\n\nWell, yes although to be fair the original version of this had a prior comment\nthat was making it much more obvious:\n\n+ /*\n+ * No error should happen after this poiht, otherwise we could leak the\n+ * newly allocated value if any.\n+ */\n\n(which would maybe have been better said \"Nothing that can error out should be\ncalled after that point\"). After quite a lot of patch revisions it now simply\nsays:\n\n+\t/* We can overwrite old variable now. 
No error expected */\n\nI agree that a bit more explanation is needed, and maybe also reminding that\nthis is because all of that is done in a persistent memory context.\n\n\n", "msg_date": "Sat, 12 Aug 2023 20:00:55 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\npá 11. 8. 2023 v 17:58 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\nnapsal:\n\n> > On Thu, Aug 03, 2023 at 08:15:13AM +0200, Pavel Stehule wrote:\n> > Hi\n> >\n> > fresh rebase\n>\n> Thanks for continuing efforts. The new patch structure looks better to\n> me (although the boundary between patches 0001 and 0002 is somewhat\n> fuzzy, e.g. the function NameListToString is used already in the first\n> one, but defined in the second). Couple of commentaries along the way:\n>\n\nNameListToString is already buildin function. Do you think NamesFromList?\n\nThis is my oversight - there is just `+extern List *NamesFromList(List\n*names); ` line, but sure - it should be in 0002 patch\n\nfixed now\n\nFor all patches I tested the possibility to compile without following\npatches, but this issue was not reported by the compiler.\n\nFirst patch is related to the system catalog - so you can create, drop, and\nbackup session variables. Second patch is dedicated to possibility to store\nand use an value to session variable\n\n\n> * Looks like it's common to use BKI_DEFAULT when defining catalog\n> entities, something like BKI_DEFAULT(-1) for typmod, BKI_DEFAULT(0) for\n> collation, etc. 
Does it make sense to put few default values into\n> pg_variable as well?\n>\n\ndone\n\n\n> * The first patch contains:\n>\n> diff --git a/src/backend/access/transam/xact.c\n> b/src/backend/access/transam/xact.c\n> @@ -2800,6 +2800,8 @@ AbortTransaction(void)\n> AtAbort_Portals();\n> smgrDoPendingSyncs(false, is_parallel_worker);\n> AtEOXact_LargeObject(false);\n> +\n> + /* 'false' means it's abort */\n> AtAbort_Notify();\n> AtEOXact_RelationMap(false, is_parallel_worker);\n> AtAbort_Twophase();\n>\n> What does the commentary refer to, is it needed?\n>\n\nit was wrongly placed, it should be part as patch 0005, but it has not too\nvaluable benefit, so I removed it\n\n\n>\n> * I see ExplainOneQuery got a new argument:\n>\n> static void ExplainOneQuery(Query *query, int cursorOptions,\n> - IntoClause *into,\n> ExplainState *es,\n> + IntoClause *into,\n> Oid targetvar, ExplainState *es,\n> const char *queryString, ParamListInfo\n> params,\n> QueryEnvironment *queryEnv);\n>\n> From what I understand it represents a potential session variable to be\n> explained. Isn't it too specific for this interface, could it be put\n> somewhere else? To be honest, I don't have any suggestions myself, but\n> it feels a bit out of place here.\n>\n\nThe target session variable is pushed there to be used for creating\nVariableDestReceiver, that is necessary for workable LET command when\nEXPLAIN is used with ANALYZE clause.\n\nI reduced the changes now, but there should be still because the target\nsession variable should be pushed to ExplainOnePlan, but PlannedStmt has\nnot any access to the Query structure where the resultVariable is stored.\nBut I need to inject only ExplainOnePlan - no others. This is the same\nreason why ExplainOnePlan has an \"into\" argument. 
In other places I can use\nthe resultVariable from the \"query\" argument.\n\n* Session variable validity logic is not always clear, at least to me,\n> producing following awkward pieces of code:\n>\n> + if (!svar->is_valid)\n> + {\n> + if (is_session_variable_valid(svar))\n> + svar->is_valid = true;\n>\n> I get it as there are two ways how a variable could be invalid?\n>\n\nThe flag is_valid is set by sinval message processing or by DROP VARIABLE\ncommand.\n\nAll invalid variables should be removed by remove_invalid_session_variables\nfunction, but this function ignores variables dropped in the current\ntransaction (and this routine is called only once per transaction - it can\nbe expensive, because it iterates over all variables currently used in\nsession). The purpose of remove_invalid_session_variables inside\nget_session_variable is cleaning memory for dropped variables when the\nprevious transaction is aborted.\n\nBut there is a possibility to revert DROP VARIABLE by using savepoint\ninside one transaction. And in this case we can have invalid variable\n(after DROP VARIABLE), that is not removed by\nremove_invalid_session_variables, but can be valid (and it is validated\nafter is_session_variable_valid).\n\nThis is reggress test scenario\n\nBEGIN;\n CREATE TEMP VARIABLE var1 AS int ON COMMIT DROP;\n LET var1 = 100;\n SAVEPOINT s1;\n DROP VARIABLE var1;\n ROLLBACK TO s1;\n SELECT var1;\n var1.\n------\n 100\n(1 row)\n\nCOMMIT;\n\nI did new comment there, and modified little bit the logic\n\nattention: the logic is different before and after patch 0004 where memory\ncleaning is implemented\n\n\n\n> * It's not always easy to follow which failure modes are taken care of.\n> E.g.\n>\n> + * Don't try to use possibly invalid data from svar. And we don't\n> want to\n> + * overwrite invalid svar immediately. 
The datumCopy can fail, and\n> in this\n> + * case, the stored value will be invalid still.\n>\n\nThis comment is related to usage of svar->typbyval and svar->typbylen for\ndatumCopy. When we accept invalidation message\nfor some variable and then svar->is_valid is false, then we should not use\nthese values, and we should reread it from catalog\n(be executing setup_session_variable). It is done on auxiliary svar,\nbecause there is a possible risk of failure of datumCopy, and the\ncontract is unchanged passed svar, when any error happens.\n\nI changed the comment.\n\n\nI couldn't find any similar precautions, how exactly datumCopy can fail,\n> are you referring to palloc/memcpy failures?\n>\n\nI expected only palloc failure.\n\n\n>\n> Another confusing example was this one at the end of set_session_variable:\n>\n> + /*\n> + * XXX While unlikely, an error here is possible. It wouldn't leak\n> memory\n> + * as the allocated chunk has already been correctly assigned to\n> the\n> + * session variable, but would contradict this function contract,\n> which is\n> + * that this function should either succeed or leave the current\n> value\n> + * untouched.\n> + */\n> + elog(DEBUG1, \"session variable \\\"%s.%s\\\" (oid:%u) has new value\",\n> +\n> get_namespace_name(get_session_variable_namespace(svar->varid)),\n> + get_session_variable_name(svar->varid),\n> + svar->varid);\n>\n> It's not clear, which exactly error you're talking about, it's the last\n> instruction in the function.\n>\n> Maybe it would be beneficial to have some overarching description, all\n> in one place, about how session variables implementation handles various\n> failures?\n>\n\nCurrently, there are only two places where there can be some failure - one\nis related to set and datumCopy, a second to evaluation of default\nexpressions.\n\nAny other possible failures like domain's exception or not null exception\nhas not any impact on stored value.\n\nregards\n\nPavel", "msg_date": "Wed, 23 Aug 2023 
16:02:44 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nunfortunately I didn't attach all patches\n\nagain\n\nPavel", "msg_date": "Wed, 23 Aug 2023 16:04:20 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nfresh rebase\n\nRegards\n\nPavel", "msg_date": "Thu, 31 Aug 2023 20:50:02 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nonly rebase\n\nRegards\n\nPavel", "msg_date": "Thu, 28 Sep 2023 09:03:40 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nWhen I thought about global temporary tables, I got one maybe interesting\nidea. The one significant problem of global temporary tables is place for\nstoring info about size or column statistics.\n\nI think so these data can be stored simply in session variables. Any global\ntemporary table can get assigned one session variable, that can hold these\ndata.\n\nRegards\n\nPavel\n\nHiWhen I thought about global temporary tables, I got one maybe interesting idea. The one significant problem of global temporary tables is place for storing info about size or column statistics.I think so these data can be stored simply in session variables. 
Any global temporary table can get assigned one session variable, that can hold these data.RegardsPavel", "msg_date": "Tue, 17 Oct 2023 08:52:13 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nfresh rebase\n\nregards\n\nPavel", "msg_date": "Fri, 27 Oct 2023 17:58:14 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "> On Wed, Aug 23, 2023 at 04:02:44PM +0200, Pavel Stehule wrote:\n> NameListToString is already buildin function. Do you think NamesFromList?\n>\n> This is my oversight - there is just `+extern List *NamesFromList(List\n> *names); ` line, but sure - it should be in 0002 patch\n>\n> fixed now\n\nRight, thanks for fixing.\n\nI think there is a wrinkle with pg_session_variables function. It\nreturns nothing if sessionvars hash table is empty, which has two\nconsequences:\n\n* One might get confused about whether a variable is created,\n based on the information from the function. An expected behaviour, but\n could be considered a bad UX.\n\n =# CREATE VARIABLE var1 AS varchar;\n\n -- empty, is expected\n =# SELECT name, typname, can_select, can_update FROM pg_session_variables();\n name | typname | can_select | can_update\n ------+---------+------------+------------\n (0 rows)\n\n -- but one can't create a variable\n =# CREATE VARIABLE var1 AS varchar;\n ERROR: 42710: session variable \"var1\" already exists\n LOCATION: create_variable, pg_variable.c:102\n\n -- yet, suddenly after a select...\n =# SELECT var2;\n var2\n ------\n NULL\n (1 row)\n\n -- ... 
it's not empty\n =# SELECT name, typname, can_select, can_update FROM\n pg_session_variables();\n name | typname | can_select | can_update\n ------+-------------------+------------+------------\n var2 | character varying | t | t\n (1 row)\n\n* Running a parallel query will end up returning an empty result even\n after accessing the variable.\n\n -- debug_parallel_query = 1 all the time\n =# CREATE VARIABLE var2 AS varchar;\n\n -- empty, is expected\n =# SELECT name, typname, can_select, can_update FROM pg_session_variables();\n name | typname | can_select | can_update\n ------+---------+------------+------------\n (0 rows)\n\n -- but this time an access...\n SELECT var2;\n var2\n ------\n NULL\n (1 row)\n\n -- or set...\n =# LET var2 = 'test';\n\n -- doesn't change the result, it's still empty\n =# SELECT name, typname, can_select, can_update FROM pg_session_variables();\n name | typname | can_select | can_update\n ------+---------+------------+------------\n (0 rows)\n\nWould it be a problem to make pg_session_variables inspect the catalog\nor something similar if needed?\n\n\n", "msg_date": "Fri, 17 Nov 2023 20:17:32 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nOn Fri, Nov 17, 2023 at 20:17 Dmitry Dolgov <9erthalion6@gmail.com>\nwrote:\n\n> > On Wed, Aug 23, 2023 at 04:02:44PM +0200, Pavel Stehule wrote:\n> > NameListToString is already buildin function. Do you think\n> NamesFromList?\n> >\n> > This is my oversight - there is just `+extern List *NamesFromList(List\n> > *names); ` line, but sure - it should be in 0002 patch\n> >\n> > fixed now\n>\n> Right, thanks for fixing.\n>\n> I think there is a wrinkle with pg_session_variables function. 
It\n> returns nothing if sessionvars hash table is empty, which has two\n> consequences:\n>\n> * One might get confused about whether a variable is created,\n> based on the information from the function. An expected behaviour, but\n> could be considered a bad UX.\n>\n> =# CREATE VARIABLE var1 AS varchar;\n>\n> -- empty, is expected\n> =# SELECT name, typname, can_select, can_update FROM\n> pg_session_variables();\n> name | typname | can_select | can_update\n> ------+---------+------------+------------\n> (0 rows)\n>\n> -- but one can't create a variable\n> =# CREATE VARIABLE var1 AS varchar;\n> ERROR: 42710: session variable \"var1\" already exists\n> LOCATION: create_variable, pg_variable.c:102\n>\n> -- yet, suddenly after a select...\n> =# SELECT var2;\n> var2\n> ------\n> NULL\n> (1 row)\n>\n> -- ... it's not empty\n> =# SELECT name, typname, can_select, can_update FROM pg_sessio\n> n_variables();\n> name | typname | can_select | can_update\n> ------+-------------------+------------+------------\n> var2 | character varying | t | t\n> (1 row)\n>\n> * Running a parallel query will end up returning an empty result even\n> after accessing the variable.\n>\n> -- debug_parallel_query = 1 all the time\n> =# CREATE VARIABLE var2 AS varchar;\n>\n> -- empty, is expected\n> =# SELECT name, typname, can_select, can_update FROM\n> pg_session_variables();\n> name | typname | can_select | can_update\n> ------+---------+------------+------------\n> (0 rows)\n>\n> -- but this time an access...\n> SELECT var2;\n> var2\n> ------\n> NULL\n> (1 row)\n>\n> -- or set...\n> =# LET var2 = 'test';\n>\n> -- doesn't change the result, it's still empty\n> =# SELECT name, typname, can_select, can_update FROM\n> pg_session_variables();\n> name | typname | can_select | can_update\n> ------+---------+------------+------------\n> (0 rows)\n>\n> Would it be a problem to make pg_session_variables inspect the catalog\n> or something similar if needed?\n>\n\nIt can be very easy to build 
pg_session_variables based on iteration over\nthe system catalog. But I am not sure if we want it. pg_session_variables()\nis designed to show the variables from session memory, and it is used for\ntesting. Originally it was named pg_debug_session_variables. If we iterate\nover catalog, it means using locks, and it can have an impact on isolation\ntests.\n\nSo maybe we can introduce a parameter for this function to show all session\nvariables (based on catalog) or only used based on iteration over memory.\nDefault can be \"all\". What do you think about it?\n\nThe difference between debug_parallel_query = 1 and debug_parallel_query =\n0 is strange - and I'll check it.", "msg_date": "Sat, 18 Nov 2023 14:19:09 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On Sat, Nov 18, 2023 at 14:19 Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n> Hi\n>\n> On Fri, Nov 17, 2023 at 20:17 Dmitry Dolgov <9erthalion6@gmail.com>\n> wrote:\n>\n>> > On Wed, Aug 23, 2023 at 04:02:44PM +0200, Pavel Stehule wrote:\n>> > NameListToString is already buildin function. Do you think\n>> NamesFromList?\n>> >\n>> > This is my oversight - there is just `+extern List *NamesFromList(List\n>> > *names); ` line, but sure - it should be in 0002 patch\n>> >\n>> > fixed now\n>>\n>> Right, thanks for fixing.\n>>\n>> I think there is a wrinkle with pg_session_variables function. It\n>> returns nothing if sessionvars hash table is empty, which has two\n>> consequences:\n>>\n>> * One might get confused about whether a variable is created,\n>> based on the information from the function. An expected behaviour, but\n>> could be considered a bad UX.\n>>\n>> =# CREATE VARIABLE var1 AS varchar;\n>>\n>> -- empty, is expected\n>> =# SELECT name, typname, can_select, can_update FROM\n>> pg_session_variables();\n>> name | typname | can_select | can_update\n>> ------+---------+------------+------------\n>> (0 rows)\n>>\n>> -- but one can't create a variable\n>> =# CREATE VARIABLE var1 AS varchar;\n>> ERROR: 42710: session variable \"var1\" already exists\n>> LOCATION: create_variable, pg_variable.c:102\n>>\n>> -- yet, suddenly after a select...\n>> =# SELECT var2;\n>> var2\n>> ------\n>> NULL\n>> (1 row)\n>>\n>> -- ... 
it's not empty\n>> =# SELECT name, typname, can_select, can_update FROM pg_sessio\n>> n_variables();\n>> name | typname | can_select | can_update\n>> ------+-------------------+------------+------------\n>> var2 | character varying | t | t\n>> (1 row)\n>>\n>> * Running a parallel query will end up returning an empty result even\n>> after accessing the variable.\n>>\n>> -- debug_parallel_query = 1 all the time\n>> =# CREATE VARIABLE var2 AS varchar;\n>>\n>> -- empty, is expected\n>> =# SELECT name, typname, can_select, can_update FROM\n>> pg_session_variables();\n>> name | typname | can_select | can_update\n>> ------+---------+------------+------------\n>> (0 rows)\n>>\n>> -- but this time an access...\n>> SELECT var2;\n>> var2\n>> ------\n>> NULL\n>> (1 row)\n>>\n>> -- or set...\n>> =# LET var2 = 'test';\n>>\n>> -- doesn't change the result, it's still empty\n>> =# SELECT name, typname, can_select, can_update FROM\n>> pg_session_variables();\n>> name | typname | can_select | can_update\n>> ------+---------+------------+------------\n>> (0 rows)\n>>\n>> Would it be a problem to make pg_session_variables inspect the catalog\n>> or something similar if needed?\n>>\n>\n> It can be very easy to build pg_session_variables based on iteration over\n> the system catalog. But I am not sure if we want it. pg_session_variables()\n> is designed to show the variables from session memory, and it is used for\n> testing. Originally it was named pg_debug_session_variables. If we iterate\n> over catalog, it means using locks, and it can have an impact on isolation\n> tests.\n>\n> So maybe we can introduce a parameter for this function to show all\n> session variables (based on catalog) or only used based on iteration over\n> memory. Default can be \"all\". 
What do you think about it?\n>\n> The difference between debug_parallel_query = 1 and debug_parallel_query =\n> 0 is strange - and I'll check it.\n>\n\nlooks so pg_session_variables() doesn't work in debug_parallel_query mode.", "msg_date": "Sat, 18 Nov 2023 14:25:41 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": ">>\n>> The difference between debug_parallel_query = 1 and debug_parallel_query\n>> = 0 is strange - and I'll check it.\n>>\n>\n> looks so pg_session_variables() doesn't work in debug_parallel_query mode.\n>\n\nIt is marked as parallel safe, which is probably nonsense.", "msg_date": "Sat, 18 Nov 2023 14:27:40 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "> On Sat, Nov 18, 2023 at 02:19:09PM +0100, Pavel Stehule wrote:\n> > Would it be a problem to make pg_session_variables inspect the catalog\n> > or something similar if needed?\n> >\n>\n> It can be very easy to build pg_session_variables based on iteration over\n> the system catalog. But I am not sure if we want it.\n> pg_session_variables()\n> is designed to show the variables from session memory, and it is used for\n> testing. Originally it was named pg_debug_session_variables. If we\n> iterate\n> over catalog, it means using locks, and it can have an impact on\n> isolation\n> tests.\n\nI see, thanks for clarification. In the end one can check the catalog\ndirectly of course, is there any other value in this function except for\ndebugging purposes?\n\nAs a side note, I'm intended to go one more time through the first few\npatches introducing the basic functionality, and then mark it as ready\nin CF. 
I can't break the patch in testing since quite long time, and for\nmost parts the changes make sense to me.\n\n\n", "msg_date": "Sat, 18 Nov 2023 15:50:41 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On Sat, Nov 18, 2023 at 15:54 Dmitry Dolgov <9erthalion6@gmail.com>\nwrote:\n\n> > On Sat, Nov 18, 2023 at 02:19:09PM +0100, Pavel Stehule wrote:\n> > > Would it be a problem to make pg_session_variables inspect the catalog\n> > > or something similar if needed?\n> > >\n> >\n> > It can be very easy to build pg_session_variables based on iteration over\n> > the system catalog. But I am not sure if we want it.\n> pg_session_variables()\n> > is designed to show the variables from session memory, and it is used for\n> > testing. Originally it was named pg_debug_session_variables. If we\n> iterate\n> > over catalog, it means using locks, and it can have an impact on\n> isolation\n> > tests.\n>\n> I see, thanks for clarification. In the end one can check the catalog\n> directly of course, is there any other value in this function except for\n> debugging purposes?\n>\n\nI have no idea how it can be used for different purposes. Theoretically it\ncan be used to check if some variable was used (initialized) in a session\nalready. But for this purpose it is not too practical, and if there will be\nsome request for this functionality, then we can write a special function\nfor this purpose. But I don't know any actual use cases for this.\n\n\n> As a side note, I'm intended to go one more time through the first few\n> patches introducing the basic functionality, and then mark it as ready\n> in CF. 
I can't break the patch in testing since quite long time, and for\n> most parts the changes make sense to me.\n>\n\nThank you very much, for testing, comments, and all other work.\n\nI marked pg_session_variables function as PARALLEL RESTRICTED, and did\nrebase\n\nRegards\n\nPavel", "msg_date": "Sat, 18 Nov 2023 18:28:53 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi,\n\nOn Tue, Oct 17, 2023 at 08:52:13AM +0200, Pavel Stehule wrote:\n>\n> When I thought about global temporary tables, I got one maybe interesting\n> idea. The one significant problem of global temporary tables is place for\n> storing info about size or column statistics.\n>\n> I think so these data can be stored simply in session variables. Any global\n> temporary table can get assigned one session variable, that can hold these\n> data.\n\nI don't know how realistic this would be. For instance it will require to\nproperly link the global temporary table life cycle with the session variable\nand I'm afraid it would require to add some hacks to make it work as needed.\n\nBut this still raises the question of whether this feature could be used\ninternally for the need of another feature. If we think it's likely, should we\ntry to act right now and reserve the \"pg_\" prefix for internal use rather than\ndo that a few years down the line and probably break some user code as it was\ndone recently for the role names?\n\n\n", "msg_date": "Wed, 22 Nov 2023 14:19:57 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "> On Sat, Nov 18, 2023 at 06:28:53PM +0100, Pavel Stehule wrote:\n> so 18. 11. 
2023 v 15:54 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\n> napsal:\n> > As a side note, I'm intended to go one more time through the first few\n> > patches introducing the basic functionality, and then mark it as ready\n> > in CF. I can't break the patch in testing since quite long time, and for\n> > most parts the changes make sense to me.\n>\n> I marked pg_session_variables function as PARALLEL RESTRICTED, and did\n> rebase\n\nSo, after one week of uninterrupted evening reviews I've made it through\nthe first four patches :)\n\nIt's a decent job -- more than once, looking at the code, I thought I\ncould construct a case when it's going to blow up, but everything was\nworking just fine. Yet, I think the patch still has to be reshaped a bit\nbefore moving forward. I've got a couple proposals of different nature:\nhigh level changes (you probably won't like some of them, but I'm sure\nthey're going to be useful), technical code-level improvements/comments,\nand few language changes. With those changes in mind I would be\nsatisfied with the patch, and hopefully they would also make it easier\nfor a potential committer to pick it up.\n\n# High level proposals\n\n* I would suggest reducing the scope of the patch as much as possible,\n and not just by trimming on the edges, but rather following Phileas\n Fogg's example with the steamboat Henrietta -- get rid of all\n non-essential parts. This will make this rather large patch more\n approachable for others.\n\n For that one can concentrate only on the first two patches plus the\n fourth one (memory cleanup after dropping variables), leaving DISCARD,\n ON TRANSACTION END, DEFAULT, IMMUTABLE for the follow-up in the\n future.\n\n Another thing in this context would be to evaluate plpgsql support for\n this feature. You know the use case better than me, how important it\n is? Is it an intrinsic part of the feature, or session variables could\n be still valuable enough even without plpgsql? 
From what I see\n postponing plpgsql will make everything about ~800 lines lighter (most\n likely more), and also allow ignoring a couple of concerns about the\n implementation (about this later).\n\n* The new GUC session_variables_ambiguity_warning is definitely going to\n cause many objections, it's another knob to manage very subtle\n behaviour detail very few people will ever notice. I see the point\n behind warning about ambiguity, so probably it makes sense to bite the\n bullet and decide one way or another. The proposal is to warn always\n in potentially ambiguous situations, and if concerns are high about\n logging too much, maybe do the warning on lower logging levels.\n\n# Code-level observations\n\n* It feels a bit awkward to have varid assignment logic in a separate\n function, what about adding an argument with varid to\n CreateVariableDestReceiver? SetVariableDestReceiverVarid still could\n be used for CreateDestReceiver.\n\n /*\n * Initially create a DestReceiver object.\n */\n DestReceiver *\n CreateVariableDestReceiver(void)\n\n /*\n * Set parameters for a VariableDestReceiver.\n * Should be called right after creating the DestReceiver.\n */\n void\n SetVariableDestReceiverVarid(DestReceiver *self, Oid varid)\n\n* It's worth it to add a commentary here explaining why it's fine to use\n InvalidOid here:\n\n if (pstmt->commandType != CMD_UTILITY)\n- ExplainOnePlan(pstmt, into, es, query_string, paramLI, queryEnv,\n+ ExplainOnePlan(pstmt, into, InvalidOid, es, query_string, paramLI, queryEnv,\n &planduration, (es->buffers ? 
&bufusage : NULL));\n\n My understanding is that since LetStmt is CMD_UTILITY, this branch\n will never be visited for a session variable.\n\n* IIUC this one is introduced to exclude session variables from the normal\n path with EXPR_KIND_UPDATE_TARGET:\n\n+ EXPR_KIND_ASSIGN_VARIABLE, /* PL/pgSQL assignment target - disallow\n+ * session variables */\n\n But the name doesn't sound right, maybe longer\n EXPR_KIND_UPDATE_TARGET_NO_VARS is better?\n\n* I'm curious about this one, which exactly part does this change cover?\n\n@@ -4888,21 +4914,43 @@ substitute_actual_parameters_mutator(Node *node,\n- if (param->paramkind != PARAM_EXTERN)\n+ if (param->paramkind != PARAM_EXTERN &&\n+ param->paramkind != PARAM_VARIABLE)\n elog(ERROR, \"unexpected paramkind: %d\", (int) param->paramkind);\n\n I've commented it out, but no tests were affected.\n\n* Does it mean there could be theoretically two LET statements at the\n same time with different command type, one CMD_UTILITY, one\n CMD_SELECT? Can it cause any issues?\n\n+ /*\n+ * Inside PL/pgSQL we don't want to execute LET statement as utility\n+ * command, because it disallow to execute expression as simple\n+ * expression. So for PL/pgSQL we have extra path, and we return SELECT.\n+ * Then it can be executed by exec_eval_expr. Result is dirrectly assigned\n+ * to target session variable inside PL/pgSQL LET statement handler. This\n+ * is extra code, extra path, but possibility to get faster execution is\n+ * too attractive.\n+ */\n+ if (stmt->plpgsql_mode)\n+ return query;\n+\n\n* This probably requires more explanation, is warning the only reason\n for this change?\n\n+ *\n+ * The session variables should not be used as target of PL/pgSQL assign\n+ * statement. So we should to use special parser expr kind, that disallow\n+ * usage of session variables. 
This block unwanted (in this context)\n+ * possible warning so target PL/pgSQL's variable shadows some session\n+ * variable.\n */\n target = transformExpr(pstate, (Node *) cref,\n- EXPR_KIND_UPDATE_TARGET);\n+ EXPR_KIND_ASSIGN_VARIABLE);\n\n* It would be great to have more commentaries here:\n\n\ttypedef struct\n\t{\n\t\tDestReceiver pub;\n\t\tOid varid;\n\t\tOid typid;\n\t\tint32 typmod;\n\t\tint typlen;\n\t\tint slot_offset;\n\t\tint rows;\n\t} SVariableState;\n\n For example, why does it make sense to have a field rows, when we\n are interested only in the fact that there is exactly one column?\n\n* Why is there SetSessionVariableWithSecurityCheck, but no\n GetSessionVariableWithSecurityCheck? Instead, object_aclcheck is done\n in standard_ExecutorStart, which looks a bit out of place.\n\n* pg_session_variables -- you mention it exists only for testing. What\n about moving it out into a separate patch for the sake of slimming\n down? It looks like it's used only in tests for \"memory cleanup\"\n patch, maybe they could be restructured to not require this function.\n\n* Probably it's time to drop unnecessary historical notes, like this:\n\n * Note: originally we enhanced a list xact_recheck_varids here. Unfortunately\n * it was not safe and a little bit too complex, because the sinval callback\n * function can be called when we iterate over xact_recheck_varids list.\n * Another issue was the possibility of being out of memory when we enhanced\n * the list. So now we just switch flag in related entry sessionvars hash table.\n * We need to iterate over hash table on every sinval message, so extra two\n * iteration over this hash table is not significant overhead (and we skip\n * entries that don't require recheck). 
Now we do not have any memory allocation\n * in the sinval handler (This note can be removed before commit).\n\n* The second patch \"Storage for session variables and SQL interface\",\n mentions DISCARD command:\n\n /*\n * There is no guarantee of sessionvars being initialized, even when\n * receiving an invalidation callback, as DISCARD [ ALL | VARIABLES ]\n * destroys the hash table entirely.\n */\n\n This command is implemented in a later patch, so this\n comment probably belongs there.\n\n* This comment mentions a \"direct access, without buffering\":\n\n\t/*\n\t * Direct access to session variable (without buffering). Because\n\t * returned value can be used (without an assignement) after the\n\t * referenced session variables is updated, we have to use an copy\n\t * of stored value every time.\n\t */\n\t*op->resvalue = GetSessionVariableWithTypeCheck(op->d.vparam.varid,\n\t\t\t\t\t\t\t\t\t\t\t\t\top->resnull,\n\t\t\t\t\t\t\t\t\t\t\t\t\top->d.vparam.vartype);\n\n But GetSessionVariableWithTypeCheck goes through get_session_variable\n and searches in the hash table. What does \"buffering\" mean in this\n context?\n\n* GetSessionVariableWithTypeCheck(Oid varid, bool *isNull, Oid expected_typid)\n\n Should the \"WithTypeCheck\" part be an argument of the\n GetSessionVariable? To reduce the code duplication a bit.\n\n* Just out of curiosity, why TopTransactionContext?\n\n\t/*\n\t * Store domain_check extra in TopTransactionContext. 
When we are in\n\t * other transaction, the domain_check_extra cache is not valid\n\t * anymore.\n\t */\n\tif (svar->domain_check_extra_lxid != MyProc->lxid)\n\t\tsvar->domain_check_extra = NULL;\n\n\tdomain_check(svar->value, svar->isnull,\n\t\t\t\t svar->typid, &svar->domain_check_extra,\n\t\t\t\t TopTransactionContext);\n\n* In SVariableData it would be great to have more comments around\n freeval, domain_check_extra, domain_check_extra_lxid.\n\n* Nitpicking, but the term \"shadowing\" for ambiguity between a session\n variable and a table column might be confusing, one can imagine there\n is a connection between those two objects and one actively follows\n (\"shadows\") the other one.\n\n* The second patch \"Storage for session variables and SQL interface\"\n mentions in the documentation default and temporary variables:\n\n <para>\n The value of a session variable is local to the current session. Retrieving\n a variable's value returns either a <literal>NULL</literal> or a default\n value, unless its value has been set to something else in the current\n session using the <command>LET</command> command. The content of a variable\n is not transactional. This is the same as regular variables in PL languages.\n The session variables can be persistent or can be temporary. In both cases,\n the content of session variables is temporary and not shared (like an\n content of temporary tables).\n </para>\n\n They're implemented in the following patches, so it belongs there.\n\n* Nitpicking, maybe merge those two conditions together for readability?\n\n if (!needs_validation)\n return;\n\n /*\n * Reset, this flag here, before we start the validation. 
It can be set to\n * on by incomming sinval message.\n */\n needs_validation = false;\n\n if (!sessionvars)\n return;\n\n* This one is not very clear, what is the difference between \"somewhere\n inside a transaction\" and \"at the end of a transaction\"?\n\n /*\n\t* This routine can be called somewhere inside transaction or at an transaction\n\t* end. When atEOX argument is false, then we are inside transaction, and we\n\t* don't want to throw entries related to session variables dropped in current\n\t* transaction.\n\t*/\n\n# Language topic\n\nSince this patch introduces a large body of documentation and\ncommentaries, I think it would benefit from a native speaker review.\nI've stumbled upon a few examples (attached with proposed wording, without\na diff extension to not confuse the CF bot), but otherwise if anyone\nfollows this thread, a text review is appreciated.", "msg_date": "Sun, 26 Nov 2023 18:52:59 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nOn Wed, Nov 22, 2023 at 7:20 Julien Rouhaud <rjuju123@gmail.com>\nwrote:\n\n> Hi,\n>\n> On Tue, Oct 17, 2023 at 08:52:13AM +0200, Pavel Stehule wrote:\n> >\n> > When I thought about global temporary tables, I got one maybe interesting\n> > idea. The one significant problem of global temporary tables is place for\n> > storing info about size or column statistics.\n> >\n> > I think so these data can be stored simply in session variables. Any\n> global\n> > temporary table can get assigned one session variable, that can hold\n> these\n> > data.\n>\n> I don't know how realistic this would be. 
For instance it will require to\n> properly link the global temporary table life cycle with the session\n> variable\n> and I'm afraid it would require to add some hacks to make it work as\n> needed.\n>\n> But this still raises the question of whether this feature could be used\n> internally for the need of another feature. If we think it's likely,\n> should we\n> try to act right now and reserve the \"pg_\" prefix for internal use rather\n> than\n> do that a few years down the line and probably break some user code as it\n> was\n> done recently for the role names?\n>\n\nI don't think it is necessary. Session variables (in this design) are\njoined with schemas. If we use some session variables for system purposes,\nwe can use some dedicated schema. But when I think about it in detail,\nprobably my own dedicated storage (hash table in session memory) can be\nmuch better than session variables. What can be shared (maybe) is probably\nsinval message processing.", "msg_date": "Sun, 26 Nov 2023 19:19:19 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nne 26. 11. 2023 v 18:56 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\nnapsal:\n\n> > On Sat, Nov 18, 2023 at 06:28:53PM +0100, Pavel Stehule wrote:\n> > so 18. 11. 2023 v 15:54 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\n> > napsal:\n> > > As a side note, I'm intended to go one more time through the first few\n> > > patches introducing the basic functionality, and then mark it as ready\n> > > in CF. I can't break the patch in testing since quite long time, and\n> for\n> > most parts the changes make sense to me.\n> >\n> > I marked pg_session_variables function as PARALLEL RESTRICTED, and did\n> > rebase\n>\n> So, after one week of uninterrupted evening reviews I've made it through\n> the first four patches :)\n>\n> It's a decent job -- more than once, looking at the code, I thought I\n> could construct a case when it's going to blow up, but everything was\n> working just fine. Yet, I think the patch still has to be reshaped a bit\n> before moving forward. 
I've got a couple proposals of different nature:\n> high level changes (you probably won't like some of them, but I'm sure\n> they're going to be useful), technical code-level improvements/comments,\n> and few language changes. With those changes in mind I would be\n> satisfied with the patch, and hopefully they would also make it easier\n> for a potential committer to pick it up.\n>\n> # High level proposals\n>\n> * I would suggest reducing the scope of the patch as much as possible,\n> and not just by trimming on the edges, but rather following Phileas\n> Fogg's example with the steamboat Henrietta -- get rid of all\n> non-essential parts. This will make this rather large patch more\n> approachable for others.\n>\n> For that one can concentrate only on the first two patches plus the\n> fourth one (memory cleanup after dropping variables), leaving DISCARD,\n> ON TRANSACTION END, DEFAULT, IMMUTABLE for the follow-up in the\n> future.\n>\n> Another thing in this context would be to evaluate plpgsql support for\n> this feature. You know the use case better than me, how important it\n> is? Is it an intrinsic part of the feature, or session variables could\n> be still valuable enough even without plpgsql? From what I see\n> postponing plgpsql will make everything about ~800 lines lighter (most\n> likely more), and also allow to ignore couple of concerns about the\n> implementation (about this later).\n>\n> * The new GUC session_variables_ambiguity_warning is definitely going to\n> cause many objections, it's another knob to manage very subtle\n> behaviour detail very few people will ever notice. I see the point\n> behind warning about ambiguity, so probably it makes sense to bite the\n> bullet and decide one way or another. 
The proposal is to warn always\n> in potentially ambiguous situations, and if concerns are high about\n> logging too much, maybe do the warning on lower logging levels.\n>\n> # Code-level observations\n>\n> * It feels a bit awkward to have varid assignment logic in a separate\n> function, what about adding an argument with varid to\n> CreateVariableDestReceiver? SetVariableDestReceiverVarid still could\n> be used for CreateDestReceiver.\n>\n> /*\n> * Initially create a DestReceiver object.\n> */\n> DestReceiver *\n> CreateVariableDestReceiver(void)\n>\n> /*\n> * Set parameters for a VariableDestReceiver.\n> * Should be called right after creating the DestReceiver.\n> */\n> void\n> SetVariableDestReceiverVarid(DestReceiver *self, Oid varid)\n>\n> * It's worth it to add a commentary here explaining why it's fine to use\n> InvalidOid here:\n>\n> if (pstmt->commandType != CMD_UTILITY)\n> - ExplainOnePlan(pstmt, into, es, query_string, paramLI,\n> queryEnv,\n> + ExplainOnePlan(pstmt, into, InvalidOid, es, query_string,\n> paramLI, queryEnv,\n> &planduration, (es->buffers ? 
&bufusage :\n> NULL));\n>\n> My understanding is that since LetStmt is CMD_UTILITY, this branch\n> will never be visited for a session variable.\n>\n> * IIUC this one is introduced to exclude session variables from the normal\n> path with EXPR_KIND_UPDATE_TARGET:\n>\n> + EXPR_KIND_ASSIGN_VARIABLE, /* PL/pgSQL assignment target -\n> disallow\n> + * session\n> variables */\n>\n> But the name doesn't sound right, maybe longer\n> EXPR_KIND_UPDATE_TARGET_NO_VARS is better?\n>\n> * I'm curious about this one, which exactly part does this change cover?\n>\n> @@ -4888,21 +4914,43 @@ substitute_actual_parameters_mutator(Node *node,\n> - if (param->paramkind != PARAM_EXTERN)\n> + if (param->paramkind != PARAM_EXTERN &&\n> + param->paramkind != PARAM_VARIABLE)\n> elog(ERROR, \"unexpected paramkind: %d\", (int)\n> param->paramkind);\n>\n> I've commented it out, but no tests were affected.\n>\n> * Does it mean there could be theoretically two LET statements at the\n> same time with different command type, one CMD_UTILITY, one\n> CMD_SELECT? Can it cause any issues?\n>\n> + /*\n> + * Inside PL/pgSQL we don't want to execute LET statement as\n> utility\n> + * command, because it disallow to execute expression as simple\n> + * expression. So for PL/pgSQL we have extra path, and we return\n> SELECT.\n> + * Then it can be executed by exec_eval_expr. Result is dirrectly\n> assigned\n> + * to target session variable inside PL/pgSQL LET statement\n> handler. This\n> + * is extra code, extra path, but possibility to get faster\n> execution is\n> + * too attractive.\n> + */\n> + if (stmt->plpgsql_mode)\n> + return query;\n> +\n>\n> * This probably requires more explanation, is warning the only reason\n> for this change?\n>\n> + *\n> + * The session variables should not be used as target of PL/pgSQL\n> assign\n> + * statement. So we should to use special parser expr kind, that\n> disallow\n> + * usage of session variables. 
This block unwanted (in this\n> context)\n> + * possible warning so target PL/pgSQL's variable shadows some\n> session\n> + * variable.\n> */\n> target = transformExpr(pstate, (Node *) cref,\n> -\n> EXPR_KIND_UPDATE_TARGET);\n> +\n> EXPR_KIND_ASSIGN_VARIABLE);\n>\n> * It would be great to have more commentaries here:\n>\n> typedef struct\n> {\n> DestReceiver pub;\n> Oid varid;\n> Oid typid;\n> int32 typmod;\n> int typlen;\n> int slot_offset;\n> int rows;\n> } SVariableState;\n>\n> For example, why does it make sense to have a field rows, where we\n> interested to only know the fact that there is exactly one column?\n>\n> * Why there is SetSessionVariableWithSecurityCheck, but no\n> GetSessionVariableWithSecurityCheck? Instead, object_aclcheck is done\n> in standard_ExecutorStart, which looks a bit out of place.\n>\n> * pg_session_variables -- you mention it exists only for testing. What\n> about moving it out into a separate patch for the sake of slimming\n> down? It looks like it's used only in tests for \"memory cleanup\"\n> patch, maybe they could be restructured to not require this function.\n>\n> * Probably it's time to drop unnecessary historical notes, like this:\n>\n> * Note: originally we enhanced a list xact_recheck_varids here.\n> Unfortunately\n> * it was not safe and a little bit too complex, because the sinval\n> callback\n> * function can be called when we iterate over xact_recheck_varids list.\n> * Another issue was the possibility of being out of memory when we\n> enhanced\n> * the list. So now we just switch flag in related entry sessionvars hash\n> table.\n> * We need to iterate over hash table on every sinval message, so extra two\n> * iteration over this hash table is not significant overhead (and we skip\n> * entries that don't require recheck). 
Now we do not have any memory\n> allocation\n> * in the sinval handler (This note can be removed before commit).\n>\n> * The second patch \"Storage for session variables and SQL interface\",\n> mentions DISCARD command:\n>\n> /*\n> * There is no guarantee of sessionvars being initialized, even when\n> * receiving an invalidation callback, as DISCARD [ ALL | VARIABLES ]\n> * destroys the hash table entirely.\n> */\n>\n> This command is implemented in another patch later one, so this\n> comment probably belong there.\n>\n> * This comment mentions a \"direct access, without buffering\":\n>\n> /*\n> * Direct access to session variable (without buffering). Because\n> * returned value can be used (without an assignement) after the\n> * referenced session variables is updated, we have to use an copy\n> * of stored value every time.\n> */\n> *op->resvalue = GetSessionVariableWithTypeCheck(op->d.vparam.varid,\n>\n> op->resnull,\n>\n> op->d.vparam.vartype);\n>\n> But GetSessionVariableWithTypeCheck goes through get_session_variable\n> and searches in the hash table. What \"buffering\" means in this\n> context?\n>\n> * GetSessionVariableWithTypeCheck(Oid varid, bool *isNull, Oid\n> expected_typid)\n>\n> Should the \"WithTypeCheck\" part be an argument of the\n> GetSessionVariable? To reduce the code duplication a bit.\n>\n> * Just out of curiosity, why TopTransactionContext?\n>\n> /*\n> * Store domain_check extra in TopTransactionContext. 
When we are\n> in\n> * other transaction, the domain_check_extra cache is not valid\n> * anymore.\n> */\n> if (svar->domain_check_extra_lxid != MyProc->lxid)\n> svar->domain_check_extra = NULL;\n>\n> domain_check(svar->value, svar->isnull,\n> svar->typid, &svar->domain_check_extra,\n> TopTransactionContext);\n>\n> * In SVariableData it would be great to have more comments around\n> freeval, domain_check_extra, domain_check_extra_lxid.\n>\n> * Nitpicking, but the term \"shadowing\" for ambiguity between a session\n> variable and a table column might be confusing, one can imagine there\n> is a connection between those two objects and one actively follows\n> (\"shadows\") the other one.\n>\n> * The second patch \"Storage for session variables and SQL interface\"\n> mentions in the documentation default and temporary variables:\n>\n> <para>\n> The value of a session variable is local to the current session.\n> Retrieving\n> a variable's value returns either a <literal>NULL</literal> or a\n> default\n> value, unless its value has been set to something else in the current\n> session using the <command>LET</command> command. The content of a\n> variable\n> is not transactional. This is the same as regular variables in PL\n> languages.\n> The session variables can be persistent or can be temporary. In both\n> cases,\n> the content of session variables is temporary and not shared (like an\n> content of temporary tables).\n> </para>\n>\n> They're implemented in the following patches, so it belongs there.\n>\n> * Nitpicking, maybe merge those two conditions together for readability?\n>\n> if (!needs_validation)\n> return;\n>\n> /*\n> * Reset, this flag here, before we start the validation. 
It can be\n> set to\n> * on by incomming sinval message.\n> */\n> needs_validation = false;\n>\n> if (!sessionvars)\n> return;\n>\n> * This one is not very clear, what is the difference between \"somewhere\n> inside a transaction\" and \"at the end of a transaction\"?\n>\n> /*\n> * This routine can be called somewhere inside transaction or at an\n> transaction\n> * end. When atEOX argument is false, then we are inside\n> transaction, and we\n> * don't want to throw entries related to session variables dropped\n> in current\n> * transaction.\n> */\n>\n> # Language topic\n>\n> Since this patch introduces a large body of documentation and\n> commentaries, I think it would benefit from a native speaker review.\n> I've stumbled upon few examples (attached with proposed wording, without\n> a diff extension to not confuse the CF bot), but otherwise if anyone\n> follows this thread, texts review is appreciated.\n>\n\nThank you for your review. Next two weeks I'll not too much time to work\non this patch - I have to work on some commercial work, and the week is\nPrague PgConf, so my reply will be slow. But after these events I'll\nconcentrate on this patch.\n\nRegards\n\nPavel", "msg_date": "Sun, 3 Dec 2023 06:04:12 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "> On Sun, Dec 03, 2023 at 06:04:12AM +0100, Pavel Stehule wrote:\n>\n> Thank you for your review. Next two weeks I'll not too much time to work\n> on this patch - I have to work on some commercial work, and the week is\n> Prague PgConf, so my reply will be slow. But after these events I'll\n> concentrate on this patch.\n\nNo worries, it's fine. 
Have fun at PGConf!\n\n\n", "msg_date": "Sun, 3 Dec 2023 15:14:44 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nonly rebase\n\nRegards\n\nPavel", "msg_date": "Sat, 9 Dec 2023 18:59:54 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nonly rebase\n\nRegards\n\nPavel", "msg_date": "Wed, 20 Dec 2023 09:01:21 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\n* rebase\n* applying changes from language.txt patch\n\nRegards\n\nPavel", "msg_date": "Fri, 29 Dec 2023 20:15:50 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nnew update. I separated functionality to more patches.\n\nRegards\n\nPavel", "msg_date": "Sat, 20 Jan 2024 21:26:40 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nso 20. 1. 2024 v 21:26 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> new update. 
I separated functionality to more patches.\n>\n\nwith new macro for generating syscache\n\n\n\n>\n> Regards\n>\n> Pavel\n>", "msg_date": "Tue, 23 Jan 2024 22:10:22 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nI found a few unwanted empty lines, so I fixed them\n\nRegards\n\nPavel", "msg_date": "Wed, 24 Jan 2024 19:59:11 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nrebase\n\nRegards\n\nPavel", "msg_date": "Thu, 25 Jan 2024 05:58:19 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Thanks for the update, smaller patches look promising.\n\nOff the list Pavel has mentioned that the first two patches contain a\nbare minimum for session variables, so I've reviewed them once more and\nsuggest to concentrate on them first. I'm afraid the memory cleanup\npatch has to be added to the \"bare minimum\" set as well -- otherwise in\nmy tests it was too easy to run out of memory via creating, assigning\nand dropping variables. Unfortunately one can't extract those three\npatches from the series and apply only them; the memory patch would have\nsome conflicts. Can you maybe reshuffle the series to have those patches\n(1, 2 + 8) as first three?\n\nIf that's possible, my proposal would be to proceed with them first. 
To the\nbest of my knowledge they look good to me, except a few minor details:\n\n* The documentation says in a couple of places (ddl.sgml,\n create_variable.sgml) that \"Retrieving a session variable's value\n returns either a NULL or a default value\", but as far as I see the\n default value feature is not implemented within the first two patches.\n\n* Similarly for the mention of immutable session variables in plpgsql.sgml.\n\n* Commentary to LookupVariable mentions a rowtype_only argument:\n\n\t+/*\n\t+ * Returns oid of session variable specified by possibly qualified identifier.\n\t+ *\n\t+ * If not found, returns InvalidOid if missing_ok, else throws error.\n\t+ * When rowtype_only argument is true the session variables of not\n\t+ * composite types are ignored. This should to reduce possible collisions.\n\t+ */\n\t+Oid\n\t+LookupVariable(const char *nspname,\n\t+ const char *varname,\n\t+ bool missing_ok)\n\n but the function doesn't have it.\n\n* I've noticed an interesting result when a LET statement is used to assign a\n value without a subquery:\n\n\tcreate variable test as text;\n\t-- returns NULL\n\tselect test;\n\n\t-- use repeat directly without a subquery\n\tlet test = repeat(\"test\", 100000);\n\n\t-- returns NULL\n\tselect test;\n\n I was expecting to see an error here; is this the correct behaviour?\n\n\n", "msg_date": "Sun, 28 Jan 2024 19:00:35 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "ne 28. 1. 2024 v 19:00 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\nnapsal:\n\n> Thanks for the update, smaller patches looks promising.\n>\n> Off the list Pavel has mentioned that the first two patches contain a\n> bare minimum for session variables, so I've reviewed them once more and\n> suggest to concentrate on them first. 
I'm afraid the memory cleanup\n> patch has to be added to the \"bare minimum\" set as well -- otherwise in\n> my tests it was too easy to run out of memory via creating, assigning\n> and dropping variables. Unfortunately one can't extract those three\n> patches from the series and apply only them, the memory patch would have\n> some conflicts. Can you maybe reshuffle the series to have those patches\n> (1, 2 + 8) as first three?\n>\n> If that's possible, my proposal would be to proceed with them first. To the\n> best of my knowledge they look good to me, except few minor details:\n>\n> * The documentation says in a couple of places (ddl.sgml,\n> create_variable.sgml) that \"Retrieving a session variable's value\n> returns either a NULL or a default value\", but as far as I see the\n> default value feature is not implemented within first two patches.\n>\n> * Similar with mentioning immutable session variables in plpgsql.sgml .\n>\n> * Commentary to LookupVariable mentions a rowtype_only argument:\n>\n> +/*\n> + * Returns oid of session variable specified by possibly\n> qualified identifier.\n> + *\n> + * If not found, returns InvalidOid if missing_ok, else throws\n> error.\n> + * When rowtype_only argument is true the session variables of not\n> + * composite types are ignored. 
This should to reduce possible\n> collisions.\n> + */\n> +Oid\n> +LookupVariable(const char *nspname,\n> + const char *varname,\n> + bool missing_ok)\n>\n> but the function doesn't have it.\n>\n> * I've noticed an interesting result when a LET statement is used to\n> assign a\n> value without a subquery:\n>\n> create variable test as text;\n> -- returns NULL\n> select test;\n>\n> -- use repeat directly without a subquery\n> let test = repeat(\"test\", 100000);\n>\n> -- returns NULL\n> select test;\n>\n> I was expecting to see an error here, is this a correct behaviour?\n>\n\nwhat is strange on this result?\n\n(2024-01-28 20:32:05) postgres=# let test = 'ab';\nLET\n(2024-01-28 20:32:12) postgres=# let test = repeat(\"test\", 10);\nLET\n(2024-01-28 20:32:19) postgres=# select test;\n┌──────────────────────┐\n│ test │\n╞══════════════════════╡\n│ abababababababababab │\n└──────────────────────┘\n(1 row)\n\n(2024-01-28 20:32:21) postgres=# let test = null;\nLET\n(2024-01-28 20:32:48) postgres=# let test = repeat(\"test\", 10);\nLET\n(2024-01-28 20:32:51) postgres=# select test;\n┌──────┐\n│ test │\n╞══════╡\n│ ∅ │\n└──────┘\n(1 row)\n\n(2024-01-28 20:32:53) postgres=# select repeat(test, 10);\n┌────────┐\n│ repeat │\n╞════════╡\n│ ∅ │\n└────────┘\n(1 row)\n\n\"repeat\" is the usual scalar function. 
Maybe you thought different function", "msg_date": "Sun, 28 Jan 2024 20:34:40 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "> On Sun, Jan 28, 2024 at 08:34:40PM +0100, Pavel Stehule wrote:\n> > * I've noticed an interesting result when a LET statement is used to\n> > assign a\n> > value without a subquery:\n> >\n> > create variable test as text;\n> > -- returns NULL\n> > select test;\n> >\n> > -- use repeat directly without a subquery\n> > let test = repeat(\"test\", 100000);\n> >\n> > -- returns NULL\n> > select test;\n> >\n> > I was expecting to see an error here, is this a correct behaviour?\n> >\n>\n> what is strange on this result?\n\nNever mind, I've got confused about the quotes here -- it was referring\nto the variable content, not a string.\n\n\n", "msg_date": "Sun, 28 Jan 2024 21:09:05 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nne 28. 1. 2024 v 19:00 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\nnapsal:\n\n> Thanks for the update, smaller patches looks promising.\n>\n> Off the list Pavel has mentioned that the first two patches contain a\n> bare minimum for session variables, so I've reviewed them once more and\n> suggest to concentrate on them first. I'm afraid the memory cleanup\n> patch has to be added to the \"bare minimum\" set as well -- otherwise in\n> my tests it was too easy to run out of memory via creating, assigning\n> and dropping variables. Unfortunately one can't extract those three\n> patches from the series and apply only them, the memory patch would have\n> some conflicts. 
Can you maybe reshuffle the series to have those patches\n> (1, 2 + 8) as first three?\n>\n\nprobably you need too\n\n0006-function-pg_session_variables-for-cleaning-tests.patch and\n0007-DISCARD-VARIABLES.patch\n\n6 is necessary for testing of cleaning\n\n\n> If that's possible, my proposal would be to proceed with them first. To the\n> best of my knowledge they look good to me, except few minor details:\n>\n> * The documentation says in a couple of places (ddl.sgml,\n> create_variable.sgml) that \"Retrieving a session variable's value\n> returns either a NULL or a default value\", but as far as I see the\n> default value feature is not implemented within first two patches.\n>\n\nshould be fixed\n\n\n>\n> * Similar with mentioning immutable session variables in plpgsql.sgml .\n>\n\nfixed\n\n\n>\n> * Commentary to LookupVariable mentions a rowtype_only argument:\n>\n> +/*\n> + * Returns oid of session variable specified by possibly\n> qualified identifier.\n> + *\n> + * If not found, returns InvalidOid if missing_ok, else throws\n> error.\n> + * When rowtype_only argument is true the session variables of not\n> + * composite types are ignored. 
This should to reduce possible\n> collisions.\n> + */\n> +Oid\n> +LookupVariable(const char *nspname,\n> + const char *varname,\n> + bool missing_ok)\n>\n> but the function doesn't have it.\n>\n\nremoved\n\nRegards\n\nPavel\n\n\n\n>\n> * I've noticed an interesting result when a LET statement is used to\n> assign a\n> value without a subquery:\n>\n> create variable test as text;\n> -- returns NULL\n> select test;\n>\n> -- use repeat directly without a subquery\n> let test = repeat(\"test\", 100000);\n>\n> -- returns NULL\n> select test;\n>\n> I was expecting to see an error here, is this a correct behaviour?\n>", "msg_date": "Mon, 29 Jan 2024 08:57:42 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "> On Mon, Jan 29, 2024 at 08:57:42AM +0100, Pavel Stehule wrote:\n> Hi\n>\n> ne 28. 1. 2024 v 19:00 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\n> napsal:\n>\n> > Thanks for the update, smaller patches looks promising.\n> >\n> > Off the list Pavel has mentioned that the first two patches contain a\n> > bare minimum for session variables, so I've reviewed them once more and\n> > suggest to concentrate on them first. I'm afraid the memory cleanup\n> > patch has to be added to the \"bare minimum\" set as well -- otherwise in\n> > my tests it was too easy to run out of memory via creating, assigning\n> > and dropping variables. Unfortunately one can't extract those three\n> > patches from the series and apply only them, the memory patch would have\n> > some conflicts. Can you maybe reshuffle the series to have those patches\n> > (1, 2 + 8) as first three?\n> >\n>\n> probably you need too\n>\n> 0006-function-pg_session_variables-for-cleaning-tests.patch and\n> 0007-DISCARD-VARIABLES.patch\n>\n> 6 is necessary for testing of cleaning\n\nOk, let me take a look at those. 
Unless there are any objections, my\nplan would be to give it a final check and mark the CF item as ready for\ncommitter -- meaning the first 5 patches.\n\n\n", "msg_date": "Mon, 29 Jan 2024 19:35:52 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "po 29. 1. 2024 v 19:36 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\nnapsal:\n\n> > On Mon, Jan 29, 2024 at 08:57:42AM +0100, Pavel Stehule wrote:\n> > Hi\n> >\n> > ne 28. 1. 2024 v 19:00 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\n> > napsal:\n> >\n> > > Thanks for the update, smaller patches looks promising.\n> > >\n> > > Off the list Pavel has mentioned that the first two patches contain a\n> > > bare minimum for session variables, so I've reviewed them once more and\n> > > suggest to concentrate on them first. I'm afraid the memory cleanup\n> > > patch has to be added to the \"bare minimum\" set as well -- otherwise in\n> > > my tests it was too easy to run out of memory via creating, assigning\n> > > and dropping variables. Unfortunately one can't extract those three\n> > > patches from the series and apply only them, the memory patch would\n> have\n> > > some conflicts. Can you maybe reshuffle the series to have those\n> patches\n> > > (1, 2 + 8) as first three?\n> > >\n> >\n> > probably you need too\n> >\n> > 0006-function-pg_session_variables-for-cleaning-tests.patch and\n> > 0007-DISCARD-VARIABLES.patch\n> >\n> > 6 is necessary for testing of cleaning\n>\n> Ok, let me take a look at those. Unless there are any objections, my\n> plan would be to give it a final check and mark the CF item as ready for\n> committer -- meaning the first 5 patches.\n>\n\nsure.\n\nThank you very much.\n\nPavel\n\npo 29. 1. 2024 v 19:36 odesílatel Dmitry Dolgov <9erthalion6@gmail.com> napsal:> On Mon, Jan 29, 2024 at 08:57:42AM +0100, Pavel Stehule wrote:\n> Hi\n>\n> ne 28. 1. 
2024 v 19:00 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\n> napsal:\n>\n> > Thanks for the update, smaller patches looks promising.\n> >\n> > Off the list Pavel has mentioned that the first two patches contain a\n> > bare minimum for session variables, so I've reviewed them once more and\n> > suggest to concentrate on them first. I'm afraid the memory cleanup\n> > patch has to be added to the \"bare minimum\" set as well -- otherwise in\n> > my tests it was too easy to run out of memory via creating, assigning\n> > and dropping variables. Unfortunately one can't extract those three\n> > patches from the series and apply only them, the memory patch would have\n> > some conflicts. Can you maybe reshuffle the series to have those patches\n> > (1, 2 + 8) as first three?\n> >\n>\n> probably you need too\n>\n> 0006-function-pg_session_variables-for-cleaning-tests.patch and\n> 0007-DISCARD-VARIABLES.patch\n>\n> 6 is necessary for testing of cleaning\n\nOk, let me take a look at those. Unless there are any objections, my\nplan would be to give it a final check and mark the CF item as ready for\ncommitter -- meaning the first 5 patches.sure.Thank you very much. Pavel", "msg_date": "Mon, 29 Jan 2024 19:46:01 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nfresh (and only) rebase\n\nRegards\n\nPavel", "msg_date": "Tue, 30 Jan 2024 07:26:41 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Yep, in this constellation the implementation holds much better (in\nterms of memory) in my create/let/drop testing.\n\nI've marked the CF item as ready for committer, but a note for anyone\nwho would like to pick up it from here -- we're talking about first 5\npatches here, up to the memory cleaning after DROP VARIABLE. 
It doesn't\nmean the rest is somehow not worth it, but I believe it's a good first\nstep.\n\n\n", "msg_date": "Tue, 30 Jan 2024 20:14:49 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "út 30. 1. 2024 v 20:15 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\nnapsal:\n\n> Yep, in this constellation the implementation holds much better (in\n> terms of memory) in my create/let/drop testing.\n>\n> I've marked the CF item as ready for committer, but a note for anyone\n> who would like to pick up it from here -- we're talking about first 5\n> patches here, up to the memory cleaning after DROP VARIABLE. It doesn't\n> mean the rest is somehow not worth it, but I believe it's a good first\n> step.\n>\n\nThank you very much\n\nPavel", "msg_date": "Tue, 30 Jan 2024 20:23:50 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi,\n\nhere is new rebase of this patch set.\n\nYears ago I promised to implement support for transactional behaviour. I\nwrote it in patch 0019. It is based on my patch from 2020 but the memory\ncleaning is more readable and I believe it is correct. All other patches\nare without touching. 
The first five patches are of \"should to have\" type,\nall others (with new one) are \"nice to have\" type (although support for\nsimply expr evaluation or parallel execution has strong benefits).\n\nRegards\n\nPavel", "msg_date": "Tue, 20 Feb 2024 20:29:29 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nút 20. 2. 2024 v 20:29 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi,\n>\n> here is new rebase of this patch set.\n>\n> Years ago I promised to implement support for transactional behaviour. I\n> wrote it in patch 0019. It is based on my patch from 2020 but the memory\n> cleaning is more readable and I believe it is correct. All other patches\n> are without touching. The first five patches are of \"should to have\" type,\n> all others (with new one) are \"nice to have\" type (although support for\n> simply expr evaluation or parallel execution has strong benefits).\n>\n\nfresh rebase\n\n\n>\n> Regards\n>\n> Pavel\n>", "msg_date": "Tue, 27 Feb 2024 12:53:43 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nonly rebase", "msg_date": "Wed, 28 Feb 2024 05:18:21 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nonly rebase\n\nRegards\n\nPavel", "msg_date": "Mon, 4 Mar 2024 07:05:15 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nonly rebase\n\nRegards\n\nPavel", "msg_date": "Tue, 5 Mar 2024 08:26:57 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: 
Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nonly rebase\n\nRegards\n\nPavel", "msg_date": "Mon, 11 Mar 2024 07:21:11 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nonly rebase\n\nRegards\n\nPavel", "msg_date": "Fri, 15 Mar 2024 06:38:46 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nnew fresh rebase\n\nRegards\n\nPavel", "msg_date": "Mon, 18 Mar 2024 08:08:54 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nfresh rebase\n\nregards\n\nPavel", "msg_date": "Wed, 20 Mar 2024 09:44:56 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\ntoday second update\n\nfix warning\n\nRegards\n\nPavel", "msg_date": "Wed, 20 Mar 2024 18:58:08 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nfresh rebase\n\nrename function DropVariable to DropVariableById, to be name consistent\nwith other Dropxxx routines\n\nRegards\n\nPavel", "msg_date": "Tue, 26 Mar 2024 20:24:02 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nonly rebase\n\nRegards\n\nPavel", "msg_date": "Sun, 31 Mar 2024 08:13:44 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for 
Postgres 15" }, { "msg_contents": "Hi\n\nonly rebase\n\nRegards\n\nPavel", "msg_date": "Tue, 2 Apr 2024 08:46:16 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nfresh rebase\n\nRegards\n\nPavel", "msg_date": "Thu, 4 Apr 2024 07:46:05 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nfresh rebase", "msg_date": "Thu, 11 Apr 2024 07:34:08 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nfresh rebase\n\nRegards\n\nPavel", "msg_date": "Thu, 9 May 2024 08:45:13 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nonly rebase\n\nRegards\n\nPavel", "msg_date": "Fri, 17 May 2024 07:09:12 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On 2024-Jan-30, Dmitry Dolgov wrote:\n\n> Yep, in this constellation the implementation holds much better (in\n> terms of memory) in my create/let/drop testing.\n> \n> I've marked the CF item as ready for committer, but a note for anyone\n> who would like to pick up it from here -- we're talking about first 5\n> patches here, up to the memory cleaning after DROP VARIABLE. It doesn't\n> mean the rest is somehow not worth it, but I believe it's a good first\n> step.\n\nHmm, I think patch 16 is essential, because the point of variable shadowing\nis a critical aspect of how the whole thing works. 
So I would say that\na first step would be those first five patches plus 16.\n\nI want to note that when we discussed this patch series at the dev\nmeeting in FOSDEM, a sort-of conclusion was reached that we didn't want\nschema variables at all because of the fact that creating a variable\nwould potentially change the meaning of queries by shadowing table\ncolumns. But this turns out to be incorrect: it's _variables_ that are\nshadowed by table columns, not the other way around.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"No hay ausente sin culpa ni presente sin disculpa\" (Prov. francés)\n\n\n", "msg_date": "Sat, 18 May 2024 13:29:09 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "so 18. 5. 2024 v 18:31 odesílatel Alvaro Herrera <alvherre@alvh.no-ip.org>\nnapsal:\n\n> On 2024-Jan-30, Dmitry Dolgov wrote:\n>\n> > Yep, in this constellation the implementation holds much better (in\n> > terms of memory) in my create/let/drop testing.\n> >\n> > I've marked the CF item as ready for committer, but a note for anyone\n> > who would like to pick up it from here -- we're talking about first 5\n> > patches here, up to the memory cleaning after DROP VARIABLE. It doesn't\n> > mean the rest is somehow not worth it, but I believe it's a good first\n> > step.\n>\n> Hmm, I think patch 16 is essential, because the point of variable shadowing\n> is a critical aspect of how the whole thing works. 
So I would say that\n> a first step would be those first five patches plus 16.\n>\n\nI'll move patch 16 to 6 position\n\nRegards\n\nPavel\n\n>\n> I want to note that when we discussed this patch series at the dev\n> meeting in FOSDEM, a sort-of conclusion was reached that we didn't want\n> schema variables at all because of the fact that creating a variable\n> would potentially change the meaning of queries by shadowing table\n> columns. But this turns out to be incorrect: it's _variables_ that are\n> shadowed by table columns, not the other way around.\n>\n> --\n> Álvaro Herrera PostgreSQL Developer —\n> https://www.EnterpriseDB.com/\n> \"No hay ausente sin culpa ni presente sin disculpa\" (Prov. francés)\n>", "msg_date": "Mon, 20 May 2024 09:11:22 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\npo 20. 5. 2024 v 9:11 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> so 18. 5. 2024 v 18:31 odesílatel Alvaro Herrera <alvherre@alvh.no-ip.org>\n> napsal:\n>\n>> On 2024-Jan-30, Dmitry Dolgov wrote:\n>>\n>> > Yep, in this constellation the implementation holds much better (in\n>> > terms of memory) in my create/let/drop testing.\n>> >\n>> > I've marked the CF item as ready for committer, but a note for anyone\n>> > who would like to pick up it from here -- we're talking about first 5\n>> > patches here, up to the memory cleaning after DROP VARIABLE. It doesn't\n>> > mean the rest is somehow not worth it, but I believe it's a good first\n>> > step.\n>>\n>> Hmm, I think patch 16 is essential, because the point of variable\n>> shadowing\n>> is a critical aspect of how the whole thing works. 
So I would say that\n>> a first step would be those first five patches plus 16.\n>>\n>\n> I'll move patch 16 to 6 position\n>\n\nreordered set of patches - I moved forward plpgsql-tests.patch and\nGUC-session_variables_ambiguity_warning.patch\n\n0006-plpgsql-tests.patch\n0007-GUC-session_variables_ambiguity_warning.patch\n\nno other changes\n\nRegards\n\nPavel\n\n\n>\n> Regards\n>\n> Pavel\n>\n>>\n>> I want to note that when we discussed this patch series at the dev\n>> meeting in FOSDEM, a sort-of conclusion was reached that we didn't want\n>> schema variables at all because of the fact that creating a variable\n>> would potentially change the meaning of queries by shadowing table\n>> columns. But this turns out to be incorrect: it's _variables_ that are\n>> shadowed by table columns, not the other way around.\n>>\n>> --\n>> Álvaro Herrera PostgreSQL Developer —\n>> https://www.EnterpriseDB.com/\n>> \"No hay ausente sin culpa ni presente sin disculpa\" (Prov. francés)\n>>\n>", "msg_date": "Tue, 21 May 2024 23:14:43 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On 18.05.24 13:29, Alvaro Herrera wrote:\n> I want to note that when we discussed this patch series at the dev\n> meeting in FOSDEM, a sort-of conclusion was reached that we didn't want\n> schema variables at all because of the fact that creating a variable\n> would potentially change the meaning of queries by shadowing table\n> columns. But this turns out to be incorrect: it's _variables_ that are\n> shadowed by table columns, not the other way around.\n\nBut that's still bad, because seemingly unrelated schema changes can \nmake variables appear and disappear. For example, if you have\n\nSELECT a, b FROM table1\n\nand then you drop column b, maybe the above query continues to work \nbecause there is also a variable b. 
Or maybe it now does different \nthings because b is of a different type. This all has the potential to \nbe very confusing.\n\n\n\n", "msg_date": "Wed, 22 May 2024 14:37:49 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "> On Wed, May 22, 2024 at 02:37:49PM +0200, Peter Eisentraut wrote:\n> On 18.05.24 13:29, Alvaro Herrera wrote:\n> > I want to note that when we discussed this patch series at the dev\n> > meeting in FOSDEM, a sort-of conclusion was reached that we didn't want\n> > schema variables at all because of the fact that creating a variable\n> > would potentially change the meaning of queries by shadowing table\n> > columns. But this turns out to be incorrect: it's_variables_ that are\n> > shadowed by table columns, not the other way around.\n>\n> But that's still bad, because seemingly unrelated schema changes can make\n> variables appear and disappear. For example, if you have\n>\n> SELECT a, b FROM table1\n>\n> and then you drop column b, maybe the above query continues to work because\n> there is also a variable b. Or maybe it now does different things because b\n> is of a different type. This all has the potential to be very confusing.\n\nYeah, that's a bummer. Interestingly enough, the db2 implementation of\nglobal session variables mechanism is mentioned as similar to what we\nhave in the patch. But weirdly, the db2 documentation just states\npossibility of a resolution conflict for unqualified names, nothing\nelse.\n\nThere was extensive discussion about this problem early in the thread,\nand one alternative is to use some sort of special syntax every time\nwhen working with a variable to clear any ambiguity [1]. It's more\nverbose, has to be careful to not block some useful syntax for other\nstuff, etc. 
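(To make that alternative concrete, a toy model -- hypothetical illustrative code only, not anything from the patch or from PostgreSQL internals: with a dedicated sigil, a bare identifier can only ever mean a column and a prefixed identifier can only ever mean a variable, so the two namespaces cannot collide.)

```python
# Toy model of the "special syntax" alternative; illustrative only,
# not PostgreSQL code. A bare identifier always resolves to a column,
# and an "@"-prefixed identifier always resolves to a session variable,
# so no schema change can silently redirect a reference.

def resolve(token, columns, variables):
    if token.startswith("@"):
        name = token[1:]
        if name in variables:
            return ("variable", name)
        raise LookupError(f'session variable "{name}" does not exist')
    if token in columns:
        return ("column", token)
    raise LookupError(f'column "{token}" does not exist')
```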
But as Pavel said:\n\n> The different syntax disallows any collision well, it is far to what is\n> more usual standard in this area. And if we introduce special syntax, then\n> there is no way back. We cannot use :varname - this syntax is used already,\n> but we can use, theoretically, @var or $var. But, personally, I don't want\n> to use it, if there is possibility to do without it.\n\nIt seems to me there is no other possibility to resolve those ambiguity\nissues.\n\n[1]: https://www.postgresql.org/message-id/CAFj8pRD03hwZK%2B541KDt4Eo5YuC81CBBX_P0Sa5A7g5TQFsTww%40mail.gmail.com\n\n\n", "msg_date": "Wed, 22 May 2024 16:14:26 +0200", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Peter Eisentraut <peter@eisentraut.org> writes:\n> On 18.05.24 13:29, Alvaro Herrera wrote:\n>> I want to note that when we discussed this patch series at the dev\n>> meeting in FOSDEM, a sort-of conclusion was reached that we didn't want\n>> schema variables at all because of the fact that creating a variable\n>> would potentially change the meaning of queries by shadowing table\n>> columns. But this turns out to be incorrect: it's_variables_ that are\n>> shadowed by table columns, not the other way around.\n\n> But that's still bad, because seemingly unrelated schema changes can \n> make variables appear and disappear. For example, if you have\n> \tSELECT a, b FROM table1\n> and then you drop column b, maybe the above query continues to work \n> because there is also a variable b.\n\nYeah, that seems pretty dangerous. Could we make it safe enough\nby requiring some qualification on variable names? 
That is, if\nyou mean b to be a variable, then you must write something like\n\n\tSELECT a, pg_variables.b FROM table1\n\nThis is still ambiguous if you use \"pg_variables\" as a table alias in\nthe query, but the alias would win so the query still means what it\nmeant before. Also, table aliases (as opposed to actual table names)\ndon't change readily, so I don't think there's much risk of the query\nsuddenly meaning something different than it did yesterday.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 22 May 2024 13:25:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On 2024-May-22, Dmitry Dolgov wrote:\n\n> Yeah, that's a bummer. Interestingly enough, the db2 implementation of\n> global session variables mechanism is mentioned as similar to what we\n> have in the patch. But weirdly, the db2 documentation just states\n> possibility of a resolution conflict for unqualified names, nothing\n> else.\n\nPerhaps the solution to all this is to avoid having the variables be\nimplicitly present in the range table of all queries. Instead, if you\nneed a variable's value, then you need to add the variable to the FROM\nclause; and if you try to read from the variable and the name conflicts\nwith that of a column in one of the tables in the FROM clause, then you\nget an error that the name is ambiguous and invites to qualify it.\nLike, for instance,\n\ncreate table lefttab (a int, b int);\ncreate table righttab (c int, d int, b int);\n\n=# select b from lefttab, righttab;\nERROR: column reference \"b\" is ambiguous\nLÍNEA 1: select b from lefttab, righttab;\n ^\n\nbut this works fine because there's no longer an ambiguity:\n\nselect lefttab.b from lefttab, righttab;\n b \n───\n(0 filas)\n\n\nNothing breaks if you create new variables, because your queries won't\nsee them until you explicitly request them. 
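A hypothetical sketch of this opt-in model (the FROM-clause syntax for variables shown here is illustrative only; it is not what the patch implements):

```sql
CREATE TABLE lefttab (a int, b int);
CREATE VARIABLE b AS int;

-- Existing queries are unaffected: the variable stays invisible
-- until it is requested explicitly.
SELECT a, b FROM lefttab;                 -- b is the column

-- Hypothetical explicit opt-in: the variable enters the range table
-- only when listed, so a clash surfaces as an ambiguity error.
SELECT a, b FROM lefttab, VARIABLE b;     -- ERROR: column reference "b" is ambiguous
SELECT lefttab.b FROM lefttab, VARIABLE b; -- qualification resolves the ambiguity
```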
And if you add columns\nto either tables or variables, it's possible that some queries would\nstart having ambiguous references, in which case they'll just stop\nworking until you disambiguate by editing the query.\n\n\nNow, Pavel has been saying that variables are simple and cannot break\nqueries (because they're always shadowed), which is why they're always\nimplicitly visible to all queries[1]; but maybe that's a mistake.\n\n[1] https://postgr.es/m/CAFj8pRA2P7uaFGpFJxVHrHFtizBCN41J00BrEotspdD+urGBLQ@mail.gmail.com\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"La experiencia nos dice que el hombre peló millones de veces las patatas,\npero era forzoso admitir la posibilidad de que en un caso entre millones,\nlas patatas pelarían al hombre\" (Ijon Tichy)\n\n\n", "msg_date": "Wed, 22 May 2024 19:27:56 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Alvaro Herrera:\n> Perhaps the solution to all this is to avoid having the variables be\n> implicitly present in the range table of all queries. Instead, if you\n> need a variable's value, then you need to add the variable to the FROM\n> clause;\n\n+1\n\nThis should make it easier to work with composite type schema variables \nin some cases. It could also enable schema qualifying of schema \nvariables, or at least make it easier to do, I think.\n\nIn this case variables would share the same namespace as tables and \nviews, right? So I could not create a variable with the same name as \nanother table. Which is a good thing, I guess. Not sure how it's \ncurrently implemented in the patch.\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Wed, 22 May 2024 20:21:08 +0200", "msg_from": "walther@technowledgy.de", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "st 22. 5. 
2024 v 14:37 odesílatel Peter Eisentraut <peter@eisentraut.org>\nnapsal:\n\n> On 18.05.24 13:29, Alvaro Herrera wrote:\n> > I want to note that when we discussed this patch series at the dev\n> > meeting in FOSDEM, a sort-of conclusion was reached that we didn't want\n> > schema variables at all because of the fact that creating a variable\n> > would potentially change the meaning of queries by shadowing table\n> > columns. But this turns out to be incorrect: it's_variables_ that are\n> > shadowed by table columns, not the other way around.\n>\n> But that's still bad, because seemingly unrelated schema changes can\n> make variables appear and disappear. For example, if you have\n>\n> SELECT a, b FROM table1\n>\n> and then you drop column b, maybe the above query continues to work\n> because there is also a variable b. Or maybe it now does different\n> things because b is of a different type. This all has the potential to\n> be very confusing.\n>\n\nIn the described case, the variable's shadowing warning will be raised.\n\nThere are more cases where not well designed changes (just with tables) can\nbreak queries or change results. Adding columns can be a potential risk,\ncreating tables or dropping tables (when the search path contains more\nschemas) too.\n\nGood practice is using well designed names and almost all use aliases or\nlabels, and it is one way to minimize real risks. Personally I prefer a\nvery strict mode that disallows shadowing, conflicts, ... but on second\nhand, for some usual work this strict mode can be boring, so we should find\nsome good compromise.\n\nRegards\n\nPavel\n\n", "msg_date": "Wed, 22 May 2024 20:33:46 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "st 22. 5. 
2024 v 19:25 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Peter Eisentraut <peter@eisentraut.org> writes:\n> > On 18.05.24 13:29, Alvaro Herrera wrote:\n> >> I want to note that when we discussed this patch series at the dev\n> >> meeting in FOSDEM, a sort-of conclusion was reached that we didn't want\n> >> schema variables at all because of the fact that creating a variable\n> >> would potentially change the meaning of queries by shadowing table\n> >> columns. But this turns out to be incorrect: it's_variables_ that are\n> >> shadowed by table columns, not the other way around.\n>\n> > But that's still bad, because seemingly unrelated schema changes can\n> > make variables appear and disappear. For example, if you have\n> > SELECT a, b FROM table1\n> > and then you drop column b, maybe the above query continues to work\n> > because there is also a variable b.\n>\n> Yeah, that seems pretty dangerous. Could we make it safe enough\n> by requiring some qualification on variable names? That is, if\n> you mean b to be a variable, then you must write something like\n>\n> SELECT a, pg_variables.b FROM table1\n>\n> This is still ambiguous if you use \"pg_variables\" as a table alias in\n> the query, but the alias would win so the query still means what it\n> meant before. Also, table aliases (as opposed to actual table names)\n> don't change readily, so I don't think there's much risk of the query\n> suddenly meaning something different than it did yesterday.\n>\n\nWith active shadowing variable warning for described example you will get a\nwarning before dropping.\n\nSession variables are joined with schema (in my proposal). 
Anybody can\njust do\n\nCREATE SCHEMA svars; -- or whatever (s)he likes\nCREATE VARIABLE svars.b AS int;\n\nSELECT a, b FROM table1\n\nand if somebody wants to be really safe, they can write\n\nSELECT t.a, t.b FROM table1 t\n\nor\n\nSELECT t.a, svars.b FROM table1 t\n\nIt can be customized in the way anybody prefers - just by creating dedicated\nschemas and setting search_path. Using a dedicated schema for session variables,\nwithout adding that schema to search_path, forces the use of\nonly qualified names for session variables.\n\nSure, the naming of schemas or aliases can be unhappily wrong, and then there can be\na problem. But this can be a problem today too.\n\nRegards\n\nPavel\n\n\n>\n> regards, tom lane\n>\n", "msg_date": "Wed, 22 May 2024 20:44:28 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "st 22. 5. 2024 v 20:21 odesílatel <walther@technowledgy.de> napsal:\n\n> Alvaro Herrera:\n> > Perhaps the solution to all this is to avoid having the variables be\n> > implicitly present in the range table of all queries. Instead, if you\n> > need a variable's value, then you need to add the variable to the FROM\n> > clause;\n>\n> +1\n>\n> This should make it easier to work with composite type schema variables\n> in some cases. It could also enable schema qualifying of schema\n> variables, or at least make it easier to do, I think.\n>\n> In this case variables would share the same namespace as tables and\n> views, right? So I could not create a variable with the same name as\n> another table. Which is a good thing, I guess. 
Not sure how it's\n> currently implemented in the patch.\n>\n\nI don't like this. Sure, this fixes the problem with collisions, but then\nwe cannot talk about variables. When something is used like a table, then it\nshould be a table. I can imagine memory tables, but that is a different type\nof object. A table is a relation; a variable is just a value. Variables should not\nhave columns, so using the same patterns for tables and variables makes no\nsense, and neither does using the same catalog for variables and tables. Variables just hold\na value, and then you can use it inside a query without the necessity to write a\nJOIN. Variables are not tables, and then it is not too confusing that they\nare not transactional and don't support multiple rows or columns.\n\nThe problem with collisions can be solved very easily - just use a dedicated\nschema (only for variables) and don't put it in the search path.\n\nIn this case, an unwanted collision is not too probable - although it is\npossible, if you use a schema name for a variable that is the same as a table name or\nalias name.\n\nI can use\n\nCREATE SCHEMA __;\nCREATE VARIABLE __.a AS int;\n\nSELECT __.a;\n\nAlthough it is maybe wild, probably nobody will use __ as an alias or table name,\nand then there should not be any problem.\n\n\n>\n> Best,\n>\n> Wolfgang\n>\n", "msg_date": "Wed, 22 May 2024 21:09:54 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "st 22. 5. 2024 v 14:37 odesílatel Peter Eisentraut <peter@eisentraut.org>\nnapsal:\n\n> On 18.05.24 13:29, Alvaro Herrera wrote:\n> > I want to note that when we discussed this patch series at the dev\n> > meeting in FOSDEM, a sort-of conclusion was reached that we didn't want\n> > schema variables at all because of the fact that creating a variable\n> > would potentially change the meaning of queries by shadowing table\n> > columns. 
But this turns out to be incorrect: it's_variables_ that are\n> > shadowed by table columns, not the other way around.\n>\n> But that's still bad, because seemingly unrelated schema changes can\n> make variables appear and disappear. For example, if you have\n>\n> SELECT a, b FROM table1\n>\n> and then you drop column b, maybe the above query continues to work\n> because there is also a variable b. Or maybe it now does different\n> things because b is of a different type. This all has the potential to\n> be very confusing.\n>\n\nThe detection of possible conflicts works well (in or outside PL too)\n\ncreate variable x as int;\ncreate table foo(x int);\ninsert into foo values(110);\n\nset session_variables_ambiguity_warning to on;\n\n(2024-05-23 08:22:34) postgres=# do $$\nbegin\n  raise notice '%', (select x from foo);\nend;\n$$;\nWARNING:  session variable \"x\" is shadowed\nLINE 1: (select x from foo)\n                ^\nDETAIL:  Session variables can be shadowed by columns, routine's variables\nand routine's arguments with the same name.\nQUERY:  (select x from foo)\nNOTICE:  110\nDO\n(2024-05-23 08:22:35) postgres=# do $$ declare x int default 100;\nbegin\n  raise notice '%', x;\nend;\n$$;\nWARNING:  session variable \"x\" is shadowed\nLINE 1: x\n        ^\nDETAIL:  Session variables can be shadowed by columns, routine's variables\nand routine's arguments with the same name.\nQUERY:  x\nNOTICE:  100\nDO\n", "msg_date": "Thu, 23 May 2024 08:30:14 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nst 22. 5. 
2024 v 16:14 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\nnapsal:\n\n> > On Wed, May 22, 2024 at 02:37:49PM +0200, Peter Eisentraut wrote:\n> > On 18.05.24 13:29, Alvaro Herrera wrote:\n> > > I want to note that when we discussed this patch series at the dev\n> > > meeting in FOSDEM, a sort-of conclusion was reached that we didn't want\n> > > schema variables at all because of the fact that creating a variable\n> > > would potentially change the meaning of queries by shadowing table\n> > > columns. But this turns out to be incorrect: it's_variables_ that are\n> > > shadowed by table columns, not the other way around.\n> >\n> > But that's still bad, because seemingly unrelated schema changes can make\n> > variables appear and disappear. For example, if you have\n> >\n> > SELECT a, b FROM table1\n> >\n> > and then you drop column b, maybe the above query continues to work\n> because\n> > there is also a variable b. Or maybe it now does different things\n> because b\n> > is of a different type. This all has the potential to be very confusing.\n>\n> Yeah, that's a bummer. Interestingly enough, the db2 implementation of\n> global session variables mechanism is mentioned as similar to what we\n> have in the patch. But weirdly, the db2 documentation just states\n> possibility of a resolution conflict for unqualified names, nothing\n> else.\n>\n\nI found document https://www.ibm.com/docs/it/i/7.3?topic=variables-global\n\nIf I understand well, then the same rules are applied for qualified or not\nqualified identifiers (when there is a conflict), and the variables have\nlow priority.\n\nThe db2 has the possibility to compile objects, and it can block the usage\nvariables created after compilation - (if I understand well the described\nbehaviour).\n\nRegards\n\nPavel\n", "msg_date": "Thu, 23 May 2024 21:46:31 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "> On Wed, May 22, 2024 at 08:44:28PM +0200, Pavel Stehule wrote:\n> st 22. 5. 2024 v 19:25 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>\n> > Peter Eisentraut <peter@eisentraut.org> writes:\n> > > On 18.05.24 13:29, Alvaro Herrera wrote:\n> > >> I want to note that when we discussed this patch series at the dev\n> > >> meeting in FOSDEM, a sort-of conclusion was reached that we didn't want\n> > >> schema variables at all because of the fact that creating a variable\n> > >> would potentially change the meaning of queries by shadowing table\n> > >> columns. But this turns out to be incorrect: it's_variables_ that are\n> > >> shadowed by table columns, not the other way around.\n> >\n> > > But that's still bad, because seemingly unrelated schema changes can\n> > > make variables appear and disappear. For example, if you have\n> > > SELECT a, b FROM table1\n> > > and then you drop column b, maybe the above query continues to work\n> > > because there is also a variable b.\n> >\n> > Yeah, that seems pretty dangerous. Could we make it safe enough\n> > by requiring some qualification on variable names? That is, if\n> > you mean b to be a variable, then you must write something like\n> >\n> > SELECT a, pg_variables.b FROM table1\n> >\n> > This is still ambiguous if you use \"pg_variables\" as a table alias in\n> > the query, but the alias would win so the query still means what it\n> > meant before. Also, table aliases (as opposed to actual table names)
Also, table aliases (as opposed to actual table names)\n> > don't change readily, so I don't think there's much risk of the query\n> > suddenly meaning something different than it did yesterday.\n> >\n>\n> With active shadowing variable warning for described example you will get a\n> warning before dropping.\n\nI assume you're talking about a warning, which one will get querying the\ntable with shadowed columns. If no such query has happened yet and the\ncolumn was dropped, there will be no warning.\n\nAside that, I'm afraid dropping a warning in log does not have\nsufficient visibility to warn about the issue, since one needs to read\nthose logs first. I guess what folks are looking for is more constraints\nout of the box, preventing any ambiguity.\n\n> Session variables are joined with schema (in my proposal). Do anybody can\n> do just\n>\n> CREATE SCHEMA svars; -- or what (s)he likes\n> CREATE VARIABLE svars.b AS int;\n>\n> SELECT a, b FROM table1\n>\n> and if somebody can be really safe, the can write\n>\n> SELECT t.a, t.b FROM table1 t\n>\n> or\n>\n> SELECT t.a, svars.b FROM table1 t\n>\n> It can be customized in the way anybody prefers - just creating dedicated\n> schemas and setting search_path. Using its own schema for session variables\n> without enhancing search_path for this schema forces the necessity to set\n> only qualified names for session variables.\n>\n> Sure the naming of schemas, aliases can be unhappy wrong, and there can be\n> the problem. But this can be a problem today too.\n\nIf I understand you correctly, you're saying that there are \"best\npractices\" how to deal with session variables to avoid any potential\nissues. But I think it's more user-friendly to have something that will\nnot allow shooting yourself in the foot right out of the box. 
You're\nright, similar things could probably happen with the already existing\nfunctionality, but it doesn't give us rights to add more to it.\nEspecially if it's going to be about a brand-new feature.\n\nAs far as I can see now, it's a major design flaw that could keep the\npatch from being accepted. Fortunately there are few good proposals how\nto address this, folks are genuinely trying to help. What do you think\nabout trying some of them out, as an alternative approach, to compare\nfunctionality and user experience?\n\nIn the meantime I'm afraid I have to withdraw \"Ready for committer\"\nstatus, sorry. I've clearly underestimated the importance of variables\nshadowing, thanks Alvaro and Peter for pointing out some dangerous\ncases. I still believe though that the majority of the patch is in a\ngood shape and the question about variables shadowing is the only thing\nthat keeps it from moving forward.\n\n\n", "msg_date": "Fri, 24 May 2024 13:31:43 +0200", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\npá 24. 5. 2024 v 13:32 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\nnapsal:\n\n> > On Wed, May 22, 2024 at 08:44:28PM +0200, Pavel Stehule wrote:\n> > st 22. 5. 2024 v 19:25 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n> >\n> > > Peter Eisentraut <peter@eisentraut.org> writes:\n> > > > On 18.05.24 13:29, Alvaro Herrera wrote:\n> > > >> I want to note that when we discussed this patch series at the dev\n> > > >> meeting in FOSDEM, a sort-of conclusion was reached that we didn't\n> want\n> > > >> schema variables at all because of the fact that creating a variable\n> > > >> would potentially change the meaning of queries by shadowing table\n> > > >> columns. 
But this turns out to be incorrect: it's_variables_ that\n> are\n> > >> shadowed by table columns, not the other way around.\n> > >\n> > > > But that's still bad, because seemingly unrelated schema changes can\n> > > > make variables appear and disappear. For example, if you have\n> > > > SELECT a, b FROM table1\n> > > > and then you drop column b, maybe the above query continues to work\n> > > > because there is also a variable b.\n> > >\n> > > Yeah, that seems pretty dangerous. Could we make it safe enough\n> > > by requiring some qualification on variable names? That is, if\n> > > you mean b to be a variable, then you must write something like\n> > >\n> > > SELECT a, pg_variables.b FROM table1\n> > >\n> > > This is still ambiguous if you use \"pg_variables\" as a table alias in\n> > > the query, but the alias would win so the query still means what it\n> > > meant before. Also, table aliases (as opposed to actual table names)\n> > > don't change readily, so I don't think there's much risk of the query\n> > > suddenly meaning something different than it did yesterday.\n> > >\n> >\n> > With active shadowing variable warning for described example you will\n> get a\n> > warning before dropping.\n>\n> I assume you're talking about a warning, which one will get querying the\n> table with shadowed columns. If no such query has happened yet and the\n> column was dropped, there will be no warning.\n>\n\nSure - the possible identifier collision cannot be solved perfectly in SQL.\nIt is the same with tables.\nWhen I add a badly named column to a table, I get an \"ambiguous\ncolumn\" error only when I execute the\nquery. The system catalog just cannot protect against collisions - this is\ntrue for columns, variables, and tables.\nViews are protected a little bit, because they are stored in parsed format, but any\nother object can be broken when\nsomebody chooses bad names in the catalog or in queries. There is no\nprotection against that.\n\n\n>\n> Aside that, I'm afraid dropping a warning in log does not have\n> sufficient visibility to warn about the issue, since one needs to read\n> those logs first. I guess what folks are looking for is more constraints\n> out of the box, preventing any ambiguity.\n>\n\nWe can optionally increase the level of this message to an error. It is not\nperfect, but it can work well.\n\nI think there is no higher risk with variables than the current risk with\njust tables.\n\na) the possibility to create variables is limited by rights on the schema, so\nnobody can create variables (invisibly) without the necessary rights\n\nb) if a user has their own schema with the CREATE right, then they can create variables\njust for themselves, and with the default setting, visible and\naccessible just to themselves. When other users try to use these variables,\nthe query fails due to missing access rights (usually).\nA common user cannot create variables in the application schema and cannot\nset search_path for applications.\n\nc) schema changes are usually tested in some testing stages before\nthey are applied in production. So when there\nis a possible collision or some other defect, it will probably be\ndetected there. Untested catalog changes in production are not too common\ntoday.\n\nd) any risk related to variables is tied just to renaming a\ncolumn or table.\n\n\n\n>\n> > Session variables are joined with schema (in my proposal). Do anybody can\n> > do just\n> >\n> > CREATE SCHEMA svars; -- or what (s)he likes\n> > CREATE VARIABLE svars.b AS int;\n> >\n> > SELECT a, b FROM table1\n> >\n> > and if somebody can be really safe, the can write\n> >\n> > SELECT t.a, t.b FROM table1 t\n> >\n> > or\n> >\n> > SELECT t.a, svars.b FROM table1 t\n> >\n> > It can be customized in the way anybody prefers - just creating dedicated\n> > schemas and setting search_path. Using its own schema for session
Using its own schema for session\n> variables\n> > without enhancing search_path for this schema forces the necessity to set\n> > only qualified names for session variables.\n> >\n> > Sure the naming of schemas, aliases can be unhappy wrong, and there can\n> be\n> > the problem. But this can be a problem today too.\n>\n> If I understand you correctly, you're saying that there are \"best\n> practices\" how to deal with session variables to avoid any potential\n> issues. But I think it's more user-friendly to have something that will\n> not allow shooting yourself in the foot right out of the box. You're\n> right, similar things could probably happen with the already existing\n> functionality, but it doesn't give us rights to add more to it.\n> Especially if it's going to be about a brand-new feature.\n>\n\nUnfortunately, there is not any possibility - just in SQL (without\nintroduction of variables).\n\nExample - Tom's proposal using dedicated schema\n\nok - I can limit the possibility to create variables just for schema\n\"pg_var\"\n\nCREATE VARIABLE pg_var.a AS int;\n\nbut if somebody will write query like\n\nSELECT pg_var.a FROM tab pg_var\n\nthen we are back on start.\n\n\n\n>\n> As far as I can see now, it's a major design flaw that could keep the\n> patch from being accepted. Fortunately there are few good proposals how\n> to address this, folks are genuinely trying to help. What do you think\n> about trying some of them out, as an alternative approach, to compare\n> functionality and user experience?\n>\n\nIt is a design flaw of SQL. The issue we talk about is the generic property\nof SQL, and then you cannot fix it.\n\nI thought about possibility to introduce dedicated function\n\nsvalue(regvariable) returns any - with planner support\n\nand possibility to force usage of this function. 
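A minimal sketch of what that forced accessor could look like (`svalue`, the `regvariable`-style reference, and `CREATE VARIABLE` are all syntax from this proposal, not from any released PostgreSQL):\n\n```sql\n-- hypothetical syntax from the proposed session-variables patch\nCREATE SCHEMA myvar;\nCREATE VARIABLE myvar.var AS int;\n\n-- every read goes through the dedicated function, so a variable\n-- reference can never be mistaken for a column reference:\nSELECT * FROM tab WHERE a = svalue('myvar.var');\n```\n\n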
Another possibility is\nusing some simple dedicated operator (syntax) for force using of variables\nso theoretically this can looks like:\n\nset strict_usage_of_session_variables to on;\nSELECT * FROM tab WHERE a = svalue('myvar.var');\nor\n\nSELECT * FROM tab WHERE a = @ myvar.var;\n\nThis can be really safe. Personally It is not my cup of tea, but I can live\nit (and this mode can be default).\n\nTheoretically we can limit usage of variables just for PL/pgSQL. It can\nreduce risks too, but it breaks usage variables for parametrization of DO\nblocks (what is my primary motivation), but it can be good enough to\nsupport migration from PL/SQL.\n\n\n>\n> In the meantime I'm afraid I have to withdraw \"Ready for committer\"\n> status, sorry. I've clearly underestimated the importance of variables\n> shadowing, thanks Alvaro and Peter for pointing out some dangerous\n> cases. I still believe though that the majority of the patch is in a\n> good shape and the question about variables shadowing is the only thing\n> that keeps it from moving forward.\n>\n\nI understand.\n\nI'll try to recapitulate my objections against proposed designs\n\na) using syntax like MS - DECLARE command and '@@' prefix - it is dynamic,\nso there is not possibility of static check. It is not joined with schema,\nso there are possible collisions between variables and and the end the\nvariables are named like @@mypackage_myvar - so some custom naming\nconvention is necessary too. There is not possibility to set access rights.\n\nb) using variables like MySQL - first usage define it, and access by '@'\nprefix. It is simple, but without possibility of static check. There is not\npossibility to set access rights.\n\nc) using variables with necessity to define it in FROM clause. It is safe,\nbut it can be less readable, when you use more variables, and it is not too\nreadable, and user friendly, because you need to write FROM. 
And can be\nmessy, because you usually will use variables in queries, and it is\nintroduce not relations into FROM clause. But I can imagine this mode as\nalternative syntax, but it is very unfriendly and not intuitive (I think).\nMore probably it doesn't fast execution in simple expression execution mode.\n\nd) my proposal - there is possibility of collisions, but consistent with\nnaming of database objects, allows set of access rights, allows static\nanalyze, consistent with PL/pgSQL and similar to PL/pgSQL.\n\nThere is not any other possibility. Any time this is war between be user\nfriendly, be readable, be correctly - but there is not perfect solution,\nbecause just SQL is not perfect. Almost all mentioned objections against\nproposed variables are valid just for tables and columns.\n\nRegards\n\nPavel\n", "msg_date": "Fri, 24 May 2024 15:00:48 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": ">\n>\n>\n>\n>>\n>> As far as I can see now, it's a major design flaw that could keep the\n>> patch from being accepted. Fortunately there are few good proposals how\n>> to address this, folks are genuinely trying to help. What do you think\n>> about trying some of them out, as an alternative approach, to compare\n>> functionality and user experience?\n>>\n>\n> It is a design flaw of SQL. The issue we talk about is the generic\n> property of SQL, and then you cannot fix it.\n>\n> I thought about possibility to introduce dedicated function\n>\n> svalue(regvariable) returns any - with planner support\n>\n> and possibility to force usage of this function. Another possibility is\n> using some simple dedicated operator (syntax) for force using of variables\n> so theoretically this can looks like:\n>\n> set strict_usage_of_session_variables to on;\n> SELECT * FROM tab WHERE a = svalue('myvar.var');\n> or\n>\n> SELECT * FROM tab WHERE a = @ myvar.var;\n>\n> This can be really safe. Personally It is not my cup of tea, but I can\n> live it (and this mode can be default).\n>\n> Theoretically we can limit usage of variables just for PL/pgSQL. It can\n> reduce risks too, but it breaks usage variables for parametrization of DO\n> blocks (what is my primary motivation), but it can be good enough to\n> support migration from PL/SQL.\n>\n\nanother possibility can be disable / enable usage of session variables on\nsession level\n\nlike set enable_session_variable to on/off\n\nso when the application doesn't use session variables, and then session\nvariables can be disabled, but the user can enable it just for self for\nself session. 
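Spelled out, that opt-in switch might be used as follows (the GUC name `enable_session_variable` and the variable syntax are hypothetical, taken from this proposal; none of it exists in released PostgreSQL):\n\n```sql\n-- hypothetical: proposed syntax, not available in shipped PostgreSQL\nSET enable_session_variable TO off;  -- application session: variables invisible\nSELECT a, b FROM table1;             -- b can only resolve to a column here\n\nSET enable_session_variable TO on;   -- a user opts in for their own session\nSELECT a, svars.b FROM table1;       -- variable reads become possible again\n```\n\n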
Then the risk of unwanted usage of session variables can be\nzero. This is similar to discussion about login triggers. This mechanism\ncan be used for using session variables only in PL too.\n\n\n\n\n>\n>\n>>\n>> In the meantime I'm afraid I have to withdraw \"Ready for committer\"\n>> status, sorry. I've clearly underestimated the importance of variables\n>> shadowing, thanks Alvaro and Peter for pointing out some dangerous\n>> cases. I still believe though that the majority of the patch is in a\n>> good shape and the question about variables shadowing is the only thing\n>> that keeps it from moving forward.\n>>\n>\n> I understand.\n>\n> I'll try to recapitulate my objections against proposed designs\n>\n> a) using syntax like MS - DECLARE command and '@@' prefix - it is dynamic,\n> so there is not possibility of static check. It is not joined with schema,\n> so there are possible collisions between variables and and the end the\n> variables are named like @@mypackage_myvar - so some custom naming\n> convention is necessary too. There is not possibility to set access rights.\n>\n> b) using variables like MySQL - first usage define it, and access by '@'\n> prefix. It is simple, but without possibility of static check. There is not\n> possibility to set access rights.\n>\n> c) using variables with necessity to define it in FROM clause. It is safe,\n> but it can be less readable, when you use more variables, and it is not too\n> readable, and user friendly, because you need to write FROM. And can be\n> messy, because you usually will use variables in queries, and it is\n> introduce not relations into FROM clause. 
But I can imagine this mode as\n> alternative syntax, but it is very unfriendly and not intuitive (I think).\n> More probably it doesn't fast execution in simple expression execution mode.\n>\n> d) my proposal - there is possibility of collisions, but consistent with\n> naming of database objects, allows set of access rights, allows static\n> analyze, consistent with PL/pgSQL and similar to PL/pgSQL.\n>\n> There is not any other possibility. Any time this is war between be user\n> friendly, be readable, be correctly - but there is not perfect solution,\n> because just SQL is not perfect. Almost all mentioned objections against\n> proposed variables are valid just for tables and columns.\n>\n> Regards\n>\n> Pavel\n>\n", "msg_date": "Fri, 24 May 2024 15:20:53 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nst 22. 5. 2024 v 19:25 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Peter Eisentraut <peter@eisentraut.org> writes:\n> > On 18.05.24 13:29, Alvaro Herrera wrote:\n> >> I want to note that when we discussed this patch series at the dev\n> >> meeting in FOSDEM, a sort-of conclusion was reached that we didn't want\n> >> schema variables at all because of the fact that creating a variable\n> >> would potentially change the meaning of queries by shadowing table\n> >> columns. But this turns out to be incorrect: it's_variables_ that are\n> >> shadowed by table columns, not the other way around.\n>\n> > But that's still bad, because seemingly unrelated schema changes can\n> > make variables appear and disappear. For example, if you have\n> > SELECT a, b FROM table1\n> > and then you drop column b, maybe the above query continues to work\n> > because there is also a variable b.\n>\n> Yeah, that seems pretty dangerous. 
Could we make it safe enough\n> by requiring some qualification on variable names? That is, if\n> you mean b to be a variable, then you must write something like\n>\n> SELECT a, pg_variables.b FROM table1\n>\n> This is still ambiguous if you use \"pg_variables\" as a table alias in\n> the query, but the alias would win so the query still means what it\n> meant before. Also, table aliases (as opposed to actual table names)\n> don't change readily, so I don't think there's much risk of the query\n> suddenly meaning something different than it did yesterday.\n>\n\nwe can introduce special safe mode started by\n\nset enable_direct_variable_read to off;\n\nand allowing access to variables only by usage dedicated function\n(supported by parser) named like variable or pg_variable\n\nso it can looks like\n\nselect a, pg_variable(myschema.myvar) from table\n\nIn this mode, the variables never are readable directly, so there is no\nrisk of collision and issue mentioned by Peter. And the argument of the\npg_variable pseudo function can be only variable, so risk of possible\ncollision can be reduced too. The pseudo function pg_variable can be used\nin less restrictive mode too, when the user can explicitly show usage of\nthe variable.\n\nTom's proposal is already almost supported now. The user can use a\ndedicated schema without assigning this schema to search_path. Then a\nqualified name should be required.\n\nCan this design be the correct answer for mentioned objections?\n\n Regards\n\nPavel\n\n\n\n> regards, tom lane\n>\n", "msg_date": "Sat, 25 May 2024 03:16:22 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> we can introduce special safe mode started by\n> set enable_direct_variable_read to off;\n> and allowing access to variables only by usage dedicated function\n> (supported by parser) named like variable or pg_variable\n\nDidn't we learn twenty years ago that GUCs that change query\nsemantics are an awful idea? Pick a single access method\nfor these things and stick to it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 May 2024 21:29:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "so 25. 5. 2024 v 3:29 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > we can introduce special safe mode started by\n> > set enable_direct_variable_read to off;\n> > and allowing access to variables only by usage dedicated function\n> > (supported by parser) named like variable or pg_variable\n>\n> Didn't we learn twenty years ago that GUCs that change query\n> semantics are an awful idea? Pick a single access method\n> for these things and stick to it.\n>\n\nI don't think the proposed GUC exactly changes query semantics - it is\nequivalent of plpgsql options: plpgsql_extra_xxxx or #variable_conflict. 
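For comparison, the existing PL/pgSQL `#variable_conflict` option referenced here already resolves exactly this kind of name ambiguity today; a sketch following the shape of the documented example (the `users` table is assumed for illustration):\n\n```sql\n-- real, existing PL/pgSQL feature: per-function resolution policy for\n-- clashes between PL/pgSQL variables and column names\n-- (options: error [default] | use_variable | use_column)\nCREATE FUNCTION stamp_user(id int, comment text) RETURNS void AS $$\n    #variable_conflict use_variable\n    DECLARE\n        curtime timestamp := now();\n    BEGIN\n        UPDATE users SET last_modified = curtime, comment = comment\n          WHERE users.id = id;\n    END;\n$$ LANGUAGE plpgsql;\n```\n\n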
It\nallows us to identify broken queries. And for tools that generates queries\nis not problem to wrap reading variable by special pseudo function. The\ncode where pseudo function will be used should to work with active or\ninactive strict mode (related to possibility to use variables).\n\nSure there is more possibilities, but I don't want to lost the possibility\nto write code like\n\nCREATE TEMP VARIABLE _x;\n\nLET _x = 'hello';\n\nDO $$\nBEGIN\n  RAISE NOTICE '%', _x;\nEND;\n$$;\n\nSo I am searching for a way to do it safely, but still intuitive and user\nfriendly.\n\nRegards\n\nPavel\n\n\n\n>\n> regards, tom lane\n>\n", "msg_date": "Sat, 25 May 2024 07:10:50 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Pavel Stehule:\n> Sure there is more possibilities, but I don't want to lost the \n> possibility to write code like\n> \n> CREATE TEMP VARIABLE _x;\n> \n> LET _x = 'hello';\n> \n> DO $$\n> BEGIN\n>   RAISE NOTICE '%', _x;\n> END;\n> $$;\n> \n> So I am searching for a way to do it safely, but still intuitive and \n> user friendly.\n\nMaybe a middle-way between this and Alvaro's proposal could be:\n\nWhenever you have a FROM clause, a variable must be added to it to be \naccessible. When you don't have a FROM clause, you can access it directly.\n\nThis would make the following work:\n\nRAISE NOTICE '%', _x;\n\nSELECT _x;\n\nSELECT tbl.*, _x FROM tbl, _x;\n\nSELECT tbl.*, (SELECT _x) FROM tbl, _x;\n\nSELECT tbl.*, (SELECT _x FROM _x) FROM tbl;\n\n\nBut the following would be an error:\n\nSELECT tbl.*, _x FROM tbl;\n\nSELECT tbl.*, (SELECT _x) FROM tbl;\n\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Sat, 25 May 2024 10:24:41 +0200", "msg_from": "walther@technowledgy.de", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "so 25. 5. 
2024 v 10:24 odesílatel <walther@technowledgy.de> napsal:\n\n> Pavel Stehule:\n> > Sure there is more possibilities, but I don't want to lost the\n> > possibility to write code like\n> >\n> > CREATE TEMP VARIABLE _x;\n> >\n> > LET _x = 'hello';\n> >\n> > DO $$\n> > BEGIN\n> >   RAISE NOTICE '%', _x;\n> > END;\n> > $$;\n> >\n> > So I am searching for a way to do it safely, but still intuitive and\n> > user friendly.\n>\n> Maybe a middle-way between this and Alvaro's proposal could be:\n>\n> Whenever you have a FROM clause, a variable must be added to it to be\n> accessible. When you don't have a FROM clause, you can access it directly.\n>\n> This would make the following work:\n>\n> RAISE NOTICE '%', _x;\n>\n> SELECT _x;\n>\n> SELECT tbl.*, _x FROM tbl, _x;\n>\n> SELECT tbl.*, (SELECT _x) FROM tbl, _x;\n>\n> SELECT tbl.*, (SELECT _x FROM _x) FROM tbl;\n>\n>\n> But the following would be an error:\n>\n> SELECT tbl.*, _x FROM tbl;\n>\n> SELECT tbl.*, (SELECT _x) FROM tbl;\n>\n>\nIt looks odd - It is not intuitive, it introduces new inconsistency inside\nPostgres, or with solutions in other databases. No other database has a\nsimilar rule, so users coming from Oracle, Db2, or MSSQL, Firebird will be\nconfused. Users that use PL/pgSQL will be confused.\n\nRegards\n\nPavel\n\n\n>\n> Best,\n>\n> Wolfgang\n>\n", "msg_date": "Sat, 25 May 2024 12:50:45 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nso 25. 5. 2024 v 3:29 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > we can introduce special safe mode started by\n> > set enable_direct_variable_read to off;\n> > and allowing access to variables only by usage dedicated function\n> > (supported by parser) named like variable or pg_variable\n>\n> Didn't we learn twenty years ago that GUCs that change query\n> semantics are an awful idea? Pick a single access method\n> for these things and stick to it.\n>\n\nI propose another variants. First we can introduce pseudo function VAR( ).\nThe argument should be session variables. The name of this function can be\npgvar, globvar, ... We can talk about good name, it should not be too long,\nbut it is not important now. The VAR() function will be pseudo function\nlike COALESCE, so we can easily to set correct result type.\n\nI see possible variants\n\n1. 
for any read of session variable, the VAR function should be used\n(everywhere), the write is not problem, there is not risk of collisions.\nWhen VAR() function will be required everywhere, then the name should be\nshorter.\n\nSELECT * FROM tab WHERE id = VAR(stehule.myvar);\nSELECT VAR(okbob.myvar);\n\n2. the usage of VAR() function should be required, when query has FROM\nclause, and then there is in risk of collisions. Without it, then the VAR()\nfunction can be optional (it is modification of Wolfgang or Alvaro\nproposals). I prefer this syntax before mentioning in FROM clause, just I\nthink so it is less confusing, and FROM clause should be used for\nrelations, and not for variables.\n\nSELECT * FROM tab WHERE id = VAR(okbob.myvar)\nSELECT okbob.myvar;\n\n3. Outside PL the VAR() function will be required, inside PL the VAR\nfunction can be optional (and we can throw an exception) when we found\ncollision like now\n\nWhat do you think about this proposal? And if you can accept it, what\nversion?\n\nI think so implementation of any proposed variant should be easy. I can add\nextra check to plpgsql_check if the argument of VAR() function is in\npossible collision with other identifiers in query, but for proposed\nvariants it is just in nice to have category\n\nRegards\n\nPavel\n\n\n\n>\n> regards, tom lane\n>\n", "msg_date": "Tue, 28 May 2024 17:18:02 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "> On Tue, May 28, 2024 at 05:18:02PM GMT, Pavel Stehule wrote:\n>\n> I propose another variants. First we can introduce pseudo function VAR(\n> ).\n> The argument should be session variables. 
The name of this function can be\n> pgvar, globvar, ... We can talk about good name, it should not be too long,\n> but it is not important now. The VAR() function will be pseudo function\n> like COALESCE, so we can easily to set correct result type.\n\nSo, the purpose of the function would be only to verify that the argument is a\nsession variable? That seems to be a very light payload, which looks a bit\nawkward.\n\nOut of those options you propose I think the first one is the\nmost straightforward one, but...\n\n> Alvaro Herrera:\n> > Perhaps the solution to all this is to avoid having the variables be\n> > implicitly present in the range table of all queries. Instead, if you\n> > need a variable's value, then you need to add the variable to the FROM\n> > clause;\n\nThe more I think about this, the more I like this solution. Marking\nwhich variables are available to the query this way, and using established\npatterns for resolving ambiguity actually looks intuitive to me. Now I know,\nyou've got strong objections:\n\n> I don't like this. Sure, this fixes the problem with collisions, but then\n> we cannot talk about variables. When some is used like a table, then it\n> should be a table. I can imagine memory tables, but it is a different type\n> of object. Table is relation, variable is just value. Variables should not\n> have columns, so using the same patterns for tables and variables has no\n> sense. Using the same catalog for variables and tables. Variables just hold\n> a value, and then you can use it inside a query without necessity to write\n> JOIN. Variables are not tables, and then it is not too confusing so they\n> are not transactional and don't support more rows, more columns.\n\nA FROM clause could contain a function returning a single value, nobody\nfinds it confusing. And at least to me it's not much different from having a\nsession variable as well, what do you think?\n\n> c) using variables with necessity to define it in FROM clause. 
It is safe,\n> but it can be less readable, when you use more variables, and it is not too\n> readable, and user friendly, because you need to write FROM. And can be\n> messy, because you usually will use variables in queries, and it is\n> introduce not relations into FROM clause. But I can imagine this mode as\n> alternative syntax, but it is very unfriendly and not intuitive (I think).\n\nThe proposal from Wolfgang to have a short-cut and not add FROM in case there\nis no danger of ambiguity seems to resolve that.\n\n> More probably it doesn't fast execution in simple expression execution mode.\n\nCould you elaborate more, what do you mean by that? If the performance\noverhead is not prohibitive (which I would expect is the case), having better\nUX for a new feature usually beats having better performance.\n\n> It looks odd - It is not intuitive, it introduces new inconsistency inside\n> Postgres, or with solutions in other databases. No other database has a\n> similar rule, so users coming from Oracle, Db2, or MSSQL, Firebird will be\n> confused. Users that use PL/pgSQL will be confused.\n\nSession variables are not part of the SQL standard, and maintaining\nconsistency with other databases is a questionable goal. Since it's a new\nfeature, I'm not sure what you mean by inconsistency inside Postgres itself.\n\nI see that the main driving case behind this patch is to help with\nmigrating from other databases that do have session variables. Going with\nvariables in FROM clause, will not make a migration much harder -- some of the\nqueries would have to modify the FROM part, and that's it, right? I could\nimagine it would be even easier than adding VAR() everywhere.\n\n\n", "msg_date": "Fri, 31 May 2024 11:46:43 +0200", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "pá 31. 5. 
2024 v 11:46 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\nnapsal:\n\n> > On Tue, May 28, 2024 at 05:18:02PM GMT, Pavel Stehule wrote:\n> >\n> > I propose another variants. First we can introduce pseudo function VAR(\n> ).\n> > The argument should be session variables. The name of this function can\n> be\n> > pgvar, globvar, ... We can talk about good name, it should not be too\n> long,\n> > but it is not important now. The VAR() function will be pseudo function\n> > like COALESCE, so we can easily to set correct result type.\n>\n> So, the purpose of the function would be only to verify that the argument\n> is a\n> session variable? That seems to be a very light payload, which looks a bit\n> awkward.\n>\n\nno, it just reduces catalog searching to variables. So with using this\nfunction, then there is no possibility of collision between variables and\nother objects. The argument can be only variable and nothing else. So then\nthe conflict is not possible. When somebody tries to specify a table or\ncolumn, then it fails, because this object will not be detected. So inside\nthis function, the tables and columns cannot to shading variables, and\nvariables cannot be replaced by columns.\n\nSo the proposed function is not just assert, it is designed like a catalog\nfilter.\n\n\n> Out of those options you propose I think the first one is the\n> most straightforward one, but...\n>\n> > Alvaro Herrera:\n> > > Perhaps the solution to all this is to avoid having the variables be\n> > > implicitly present in the range table of all queries. Instead, if you\n> > > need a variable's value, then you need to add the variable to the FROM\n> > > clause;\n>\n> The more I think about this, the more I like this solution. Marking\n> which variables are available to the query this way, and using established\n> patterns for resolving ambiguity actually looks intuitive to me. Now I\n> know,\n> you've got strong objections:\n>\n\nI still don't like this - mainly from two reasons\n\n1. 
it doesn't look user friendly - you need to maintain two different\nplaces in one query for one object. I can imagine usage there in the case\nof composite variables with unpacking (and then it can be consistent with\nothers). I can imagine to use optional usage of variables there for the\npossibility of realiasing - like functions - and if we should support it,\nthen with unpacking of composite values.\n\n(2024-05-31 12:33:57) postgres=# create type t as (a int, b int);\nCREATE TYPE\n(2024-05-31 12:35:26) postgres=# create function fx() returns t as $$\nselect 1, 2 $$ language sql;\nCREATE FUNCTION\n(2024-05-31 12:35:44) postgres=# select fx();\n┌───────┐\n│ fx │\n╞═══════╡\n│ (1,2) │\n└───────┘\n(1 row)\n\n(2024-05-31 12:35:47) postgres=# select * from fx();\n┌───┬───┐\n│ a │ b │\n╞═══╪═══╡\n│ 1 │ 2 │\n└───┴───┘\n(1 row)\n\n2. But my main argument is, it is not really safe - it solves Peter's use\ncase, but if I use a reverse example of Peter's case, I still have a\nproblem.\n\nI can have a variable x, and then I can write query like `SELECT x FROM x`;\n\nbut if somebody creates table x(x int), then the query `SELECT x FROM x`\nwill be correct, but it is surely something else. So the requirement of the\nusage variable inside FROM clause doesn't help. It doesn't work.\n\n\n\n\n\n\n\n> > I don't like this. Sure, this fixes the problem with collisions, but then\n> > we cannot talk about variables. When some is used like a table, then it\n> > should be a table. I can imagine memory tables, but it is a different\n> type\n> > of object. Table is relation, variable is just value. Variables should\n> not\n> > have columns, so using the same patterns for tables and variables has no\n> > sense. Using the same catalog for variables and tables. Variables just\n> hold\n> > a value, and then you can use it inside a query without necessity to\n> write\n> > JOIN. 
Variables are not tables, and then it is not too confusing so they\n> > are not transactional and don't support more rows, more columns.\n>\n> A FROM clause could contain a function returning a single value, nobody\n> finds it confusing. And at least to me it's not much different from having\n> a\n> session variable as well, what do you think?\n>\n\nbut there is a difference when function returns composite, and when not -\nif I use function in FROM clause, I'll get unpacked columns, when I use\nfunction in columns, then I get composite.\n\nThe usage variable in FROM clause can have sense in similar princip like\nfunctions - for possibility to use alias in same level of query and\npossibility to use one common syntax for composite unpacking. But it\ndoesn't help with safety against collisions.\n\n\n>\n> > c) using variables with necessity to define it in FROM clause. It is\n> safe,\n> > but it can be less readable, when you use more variables, and it is not\n> too\n> > readable, and user friendly, because you need to write FROM. And can be\n> > messy, because you usually will use variables in queries, and it is\n> > introduce not relations into FROM clause. But I can imagine this mode as\n> > alternative syntax, but it is very unfriendly and not intuitive (I\n> think).\n>\n> The proposal from Wolfgang to have a short-cut and not add FROM in\n> case there\n> is no danger of ambiguity seems to resolve that.\n>\n> > More probably it doesn't fast execution in simple expression execution\n> mode.\n>\n> Could you elaborate more, what do you mean by that? If the performance\n> overhead is not prohibitive (which I would expect is the case), having\n> better\n> UX for a new feature usually beats having better performance.\n>\n\nPLpgSQL has a special mode for faster expression execution. One\nprerequisite is not using FROM clause.\n\n\n> > It looks odd - It is not intuitive, it introduces new inconsistency\n> inside\n> > Postgres, or with solutions in other databases. 
No other database has a\n> > similar rule, so users coming from Oracle, Db2, or MSSQL, Firebird will\n> be\n> > confused. Users that use PL/pgSQL will be confused.\n>\n> Session variables are not part of the SQL standard, and maintaining\n> consistency with other databases is a questionable goal. Since it's a new\n> feature, I'm not sure what you mean by inconsistency inside Postgres\n> itself.\n>\n> I see that the main driving case behind this patch is to help with\n> migrating from other databases that do have session variables. Going with\n> variables in FROM clause, will not make a migration much harder -- some of\n> the\n> queries would have to modify the FROM part, and that's it, right? I could\n> imagine it would be even easier than adding VAR() everywhere.\n>\n\nI don't think - VAR(x) instead x is just a simple replacement - searching\nrelated FROM clauses is much more complex work.\n\nand if we talk about safety against collisions, then FROM clause doesn't\nhelp. Moreover, this safety is not guaranteed today because we have a\nsearch patch and we support unqualified identifiers.\n\nRegards\n\nPavel\n", "msg_date": "Fri, 31 May 2024 12:54:03 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Pavel Stehule:\n> 2. But my main argument is, it is not really safe - it solves Peter's \n> use case, but if I use a reverse example of Peter's case, I still have a \n> problem.\n> \n> I can have a variable x, and then I can write query like `SELECT x FROM x`;\n> \n> but if somebody creates table x(x int), then the query `SELECT x FROM x` \n> will be correct, but it is surely something else. So the requirement of \n> the usage variable inside FROM clause doesn't help. It doesn't work.\n\nBut in this case you could make variables and tables share the same \nnamespace, i.e. forbid creating a variable with the same name as an \nalready existing table.\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Fri, 31 May 2024 13:10:43 +0200", "msg_from": "Wolfgang Walther <walther@technowledgy.de>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "pá 31. 5. 2024 v 13:10 odesílatel Wolfgang Walther <walther@technowledgy.de>\nnapsal:\n\n> Pavel Stehule:\n> > 2. But my main argument is, it is not really safe - it solves Peter's\n> > use case, but if I use a reverse example of Peter's case, I still have a\n> > problem.\n> >\n> > I can have a variable x, and then I can write query like `SELECT x FROM\n> x`;\n> >\n> > but if somebody creates table x(x int), then the query `SELECT x FROM x`\n> > will be correct, but it is surely something else. So the requirement of\n> > the usage variable inside FROM clause doesn't help. It doesn't work.\n>\n> But in this case you could make variables and tables share the same\n> namespace, i.e. 
forbid creating a variable with the same name as an\n> already existing table.\n>\n\nIt helps, but not on 100% - there is a search path\n\n\n\n>\n>\n> Best,\n>\n> Wolfgang\n>\n", "msg_date": "Fri, 31 May 2024 13:14:19 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Pavel Stehule:\n> But in this case you could make variables and tables share the same\n> namespace, i.e. forbid creating a variable with the same name as an\n> already existing table.\n> \n> \n> It helps, but not on 100% - there is a search path\n\nI think we can ignore the search_path for this discussion. That's not a \nproblem of variables vs tables, but just a search path related problem. 
\nIt is exactly the same thing right now, when you create a new table x(x) \nin a schema which happens to be earlier in your search path.\n\nThe objection to the proposed approach for variables was that it would \nintroduce *new* ambiguities, which Alvaro's suggestion avoids.\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Fri, 31 May 2024 13:37:35 +0200", "msg_from": "Wolfgang Walther <walther@technowledgy.de>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "pá 31. 5. 2024 v 13:37 odesílatel Wolfgang Walther <walther@technowledgy.de>\nnapsal:\n\n> Pavel Stehule:\n> > But in this case you could make variables and tables share the same\n> > namespace, i.e. forbid creating a variable with the same name as an\n> > already existing table.\n> >\n> >\n> > It helps, but not on 100% - there is a search path\n>\n\n> I think we can ignore the search_path for this discussion. That's not a\n> problem of variables vs tables, but just a search path related problem.\n> It is exactly the same thing right now, when you create a new table x(x)\n> in a schema which happens to be earlier in your search path.\n>\n\nI don't think it is a valid argument - search_path is there, and we cannot\nignore it, because it allows just one case.\n\nAnd the need to use a variable in FROM clause introduces implicit unpacking\nor inconsistency with current work with composite's types, so I am more\nsure this way is not good.\n\n\n\n\n>\n> The objection to the proposed approach for variables was that it would\n> introduce *new* ambiguities, which Alvaro's suggestion avoids.\n>\n> Best,\n>\n> Wolfgang\n>\n\npá 31. 5. 2024 v 13:37 odesílatel Wolfgang Walther <walther@technowledgy.de> napsal:Pavel Stehule:\n>     But in this case you could make variables and tables share the same\n>     namespace, i.e. 
forbid creating a variable with the same name as an\n>     already existing table.\n> \n> \n> It helps, but not on 100% - there is a search path \n\nI think we can ignore the search_path for this discussion. That's not a \nproblem of variables vs tables, but just a search path related problem. \nIt is exactly the same thing right now, when you create a new table x(x) \nin a schema which happens to be earlier in your search path. I don't think it is a valid argument - search_path is there, and we cannot ignore it, because it allows just one case.And the need to use a variable in FROM clause introduces implicit unpacking or inconsistency with current work with composite's types, so I am more sure this way is not good. \n\nThe objection to the proposed approach for variables was that it would \nintroduce *new* ambiguities, which Alvaro's suggestion avoids.\n\nBest,\n\nWolfgang", "msg_date": "Fri, 31 May 2024 15:02:49 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "pá 31. 5. 2024 v 15:02 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> pá 31. 5. 2024 v 13:37 odesílatel Wolfgang Walther <\n> walther@technowledgy.de> napsal:\n>\n>> Pavel Stehule:\n>> > But in this case you could make variables and tables share the same\n>> > namespace, i.e. forbid creating a variable with the same name as an\n>> > already existing table.\n>> >\n>> >\n>> > It helps, but not on 100% - there is a search path\n>>\n>\n>> I think we can ignore the search_path for this discussion. 
That's not a\n>> problem of variables vs tables, but just a search path related problem.\n>> It is exactly the same thing right now, when you create a new table x(x)\n>> in a schema which happens to be earlier in your search path.\n>>\n\nI don't think it is a valid argument - search_path is there, and we cannot\nignore it, because it allows just one case.\n\nAnd the need to use a variable in FROM clause introduces implicit unpacking\nor inconsistency with current work with composite's types, so I am more\nsure this way is not good.\n\nThe session variables can be used in queries, but should be used in\nPL/pgSQL expressions, and then the mandatory usage in FROM clause will do\nlot of problems and unreadable code like\n\nDO $$\nBEGIN\n RAISE NOTICE '% %', (SELECT x FROM x), (SELECT a,b FROM y);\n\nEND\n$$\n\nThis requirement does variables unusable in PL\n\n\n\n>\n>\n>\n>>\n>> The objection to the proposed approach for variables was that it would\n>> introduce *new* ambiguities, which Alvaro's suggestion avoids.\n>>\n>> Best,\n>>\n>> Wolfgang\n>>\n>\n", "msg_date": "Fri, 31 May 2024 15:11:21 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Pavel Stehule:\n> The session variables can be used in queries, but should be used in \n> PL/pgSQL expressions, and then the mandatory usage in FROM clause will \n> do lot of problems and unreadable code like\n> \n> DO $$\n> BEGIN\n>   RAISE NOTICE '% %', (SELECT x FROM x), (SELECT a,b FROM y);\n> \n> END\n> $$\n> \n> This requirement does variables unusable in PL\n\nI already proposed earlier to only require listing them in FROM when \nthere is actually a related FROM.\n\nIn this case you could still write:\n\nRAISE NOTICE '% %', x, (SELECT a,b FROM y);\n\n(assuming only x is a variable here)\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Fri, 31 May 2024 15:29:29 +0200", "msg_from": "Wolfgang Walther <walther@technowledgy.de>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "pá 31. 5. 
2024 v 15:29 odesílatel Wolfgang Walther <walther@technowledgy.de>\nnapsal:\n\n> Pavel Stehule:\n> > The session variables can be used in queries, but should be used in\n> > PL/pgSQL expressions, and then the mandatory usage in FROM clause will\n> > do lot of problems and unreadable code like\n> >\n> > DO $$\n> > BEGIN\n> > RAISE NOTICE '% %', (SELECT x FROM x), (SELECT a,b FROM y);\n> >\n> > END\n> > $$\n> >\n> > This requirement does variables unusable in PL\n>\n> I already proposed earlier to only require listing them in FROM when\n> there is actually a related FROM.\n>\n\nbut there is technical problem - plpgsql expression are internally SQL\nqueries. Isn't possible to cleanly to parse queries and expressions\ndifferently.\n\n\n\n>\n> In this case you could still write:\n>\n> RAISE NOTICE '% %', x, (SELECT a,b FROM y);\n>\n> (assuming only x is a variable here)\n>\n> Best,\n>\n> Wolfgang\n>\n\npá 31. 5. 2024 v 15:29 odesílatel Wolfgang Walther <walther@technowledgy.de> napsal:Pavel Stehule:\n> The session variables can be used in queries, but should be used in \n> PL/pgSQL expressions, and then the mandatory usage in FROM clause will \n> do lot of problems and unreadable code like\n> \n> DO $$\n> BEGIN\n>    RAISE NOTICE '% %', (SELECT x FROM x), (SELECT a,b FROM y);\n> \n> END\n> $$\n> \n> This requirement does variables unusable in PL\n\nI already proposed earlier to only require listing them in FROM when \nthere is actually a related FROM.but there is technical problem - plpgsql expression are internally SQL queries. Isn't possible to cleanly to parse queries and expressions differently. 
\n\nIn this case you could still write:\n\nRAISE NOTICE '% %', x, (SELECT a,b FROM y);\n\n(assuming only x is a variable here)\n\nBest,\n\nWolfgang", "msg_date": "Fri, 31 May 2024 15:40:29 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": ">\n>\n>\n>\n>> In this case you could still write:\n>>\n>> RAISE NOTICE '% %', x, (SELECT a,b FROM y);\n>>\n>> (assuming only x is a variable here)\n>>\n>\nno - y was a composite variable.\n\nWhen you write RAISE NOTICE '%', x, then PLpgSQL parser rewrite it to RAISE\nNOTICE '%', SELECT $1\n\nThere is no parser just for expressions.\n\n\n\n>\n>> Best,\n>>\n>> Wolfgang\n>>\n>\n\n\n\nIn this case you could still write:\n\nRAISE NOTICE '% %', x, (SELECT a,b FROM y);\n\n(assuming only x is a variable here)no - y was a composite variable. When you write RAISE NOTICE '%', x, then PLpgSQL parser rewrite it to RAISE NOTICE '%', SELECT $1 There is no parser just for expressions. 
\n\nBest,\n\nWolfgang", "msg_date": "Fri, 31 May 2024 15:42:44 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Pavel Stehule:\n> When you write RAISE NOTICE '%', x, then PLpgSQL parser rewrite it to \n> RAISE NOTICE '%', SELECT $1\n> \n> There is no parser just for expressions.\n\nThat's why my suggestion in [1] already made a difference between:\n\nSELECT var;\n\nand\n\nSELECT col, var FROM table, var;\n\nSo the \"only require variable-in-FROM if FROM is used\" should extend to \nthe SQL level.\n\nThat should be possible, right?\n\nBest,\n\nWolfgang\n\n[1]: \nhttps://www.postgresql.org/message-id/e7faf42f-62b8-47f4-af5c-cb8efa3e0e20%40technowledgy.de\n\n\n", "msg_date": "Fri, 31 May 2024 15:49:38 +0200", "msg_from": "Wolfgang Walther <walther@technowledgy.de>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "pá 31. 5. 2024 v 15:49 odesílatel Wolfgang Walther <walther@technowledgy.de>\nnapsal:\n\n> Pavel Stehule:\n> > When you write RAISE NOTICE '%', x, then PLpgSQL parser rewrite it to\n> > RAISE NOTICE '%', SELECT $1\n> >\n> > There is no parser just for expressions.\n>\n> That's why my suggestion in [1] already made a difference between:\n>\n> SELECT var;\n>\n> and\n>\n> SELECT col, var FROM table, var;\n>\n> So the \"only require variable-in-FROM if FROM is used\" should extend to\n> the SQL level.\n>\n> That should be possible, right?\n>\n\n1. you need to implement extra path - the data from FROM clause are\nprocessed differently than params - it is much more code (and current code\nshould to stay if you want to support it)\n\n2. current default behave is implicit unpacking of composites when are used\nin FROM clause. So it is problem when you want to use composite in query\nwithout unpacking\n\n3. 
when I'll support SELECT var and SELECT var FROM var together, then it\nwill raise a collision with self, that should be solved\n\n4. there is not any benefit if variables and tables doen't share catalog,\nbut session variables requires lsn number, and it can be problem to use it\nis table catalog\n\n5. identification when the variable needs or doesn't need FROM clause isn't\neasy\n\nthere can be lot of combinations like SELECT (SELECT var), c FROM tab or\nSELECT var, (SELECT c) FROM c and if c is variable, then FROM is not\nnecessary.\n\nIf somebody will write SELECT (SELECT var OFFSET 0) FROM ... then subselect\ncan know nothing about outer query - so it means minimally one check over\nall nodes\n\nIt is possible / but it is multiple more complex than current code (and I\nam not sure if store lns in pg_class is possible ever)\n\n6. I think so plpgsql case statement use multicolumn expression, so you can\nwrite\n\nCASE WHEN x = 1, (SELECT count(*) FROM tab) THEN ...\n\nIt is synthetic, but we are talking about what is possible.\n\nand although it looks correctly, and will work if x will be plpgsql\nvariable, then it will not work if x will be session variable\n\nand then you need to fix it like\n\nCASE WHEN (SELECT x=1 FROM x), (SELECT count(*) FROM tab) THEN\n\nso it is possible, but it is clean only in trivial cases, and can be pretty\nmessy\n\nPersonally, I cannot to imagine to explain to any user so following\n(proposed by you) behaviour is intuitive and friendly\n\nCREATE VARIABLE a as int;\nCREATE TABLE test(id int);\n\nSELECT a; --> ok\nSELECT * FROM test WHERE id = a; -- error message \"the column \"a\" doesn't\nexists\"\n\n\n\nBest,\n>\n> Wolfgang\n>\n> [1]:\n>\n> https://www.postgresql.org/message-id/e7faf42f-62b8-47f4-af5c-cb8efa3e0e20%40technowledgy.de\n>\n\npá 31. 5. 
2024 v 15:49 odesílatel Wolfgang Walther <walther@technowledgy.de> napsal:Pavel Stehule:\n> When you write RAISE NOTICE '%', x, then PLpgSQL parser rewrite it to \n> RAISE NOTICE '%', SELECT $1\n> \n> There is no parser just for expressions.\n\nThat's why my suggestion in [1] already made a difference between:\n\nSELECT var;\n\nand\n\nSELECT col, var FROM table, var;\n\nSo the \"only require variable-in-FROM if FROM is used\" should extend to \nthe SQL level.\n\nThat should be possible, right?1. you need to implement extra path - the data from FROM clause are processed differently than params  - it is much more code (and current code should to stay if you want to support it)2. current default behave is implicit unpacking of composites when are used in FROM clause. So it is problem when you want to use composite in query without unpacking3. when I'll support SELECT var and SELECT var FROM var together, then it will raise a collision with self, that should be solved4. there is not any benefit if variables and tables doen't share catalog, but session variables requires lsn number, and it can be problem to use it is table catalog5. identification when the variable needs or doesn't need FROM clause isn't easythere can be lot of combinations like SELECT (SELECT var), c FROM tab  or SELECT var, (SELECT c) FROM c and if c is variable, then FROM is not necessary.If somebody will write SELECT (SELECT var OFFSET 0) FROM ... then subselect can know nothing about outer query - so it means minimally one check over all nodesIt is possible / but it is multiple more complex than current code (and I am not sure if store lns in pg_class is possible ever)6. I think so plpgsql case statement use multicolumn expression, so you can writeCASE WHEN x = 1, (SELECT count(*) FROM tab) THEN ... 
It is synthetic, but we are talking about what is possible.and although it looks correctly, and will work if x will be plpgsql variable, then it will not work if x will be session variableand then you need to fix it likeCASE WHEN (SELECT x=1 FROM x), (SELECT count(*) FROM tab) THENso it is possible, but it is clean only in trivial cases, and can be pretty messy Personally, I cannot to imagine to explain to any user so following (proposed by you) behaviour is intuitive and friendlyCREATE VARIABLE a as int;CREATE TABLE test(id int);SELECT a; --> okSELECT * FROM test WHERE id = a; -- error message \"the column \"a\" doesn't exists\"\nBest,\n\nWolfgang\n\n[1]: \nhttps://www.postgresql.org/message-id/e7faf42f-62b8-47f4-af5c-cb8efa3e0e20%40technowledgy.de", "msg_date": "Fri, 31 May 2024 16:33:56 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "On 25.05.24 12:50, Pavel Stehule wrote:\n> It looks odd - It is not intuitive, it introduces new inconsistency \n> inside Postgres, or with solutions in other databases. No other database \n> has a similar rule, so users coming from Oracle, Db2, or MSSQL, Firebird \n> will be confused. Users that use PL/pgSQL will be confused.\n\nDo you have a description of what those other systems do? Maybe you \nposted it already earlier?\n\n\n\n", "msg_date": "Sun, 2 Jun 2024 23:31:03 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "ne 2. 6. 2024 v 23:31 odesílatel Peter Eisentraut <peter@eisentraut.org>\nnapsal:\n>\n> On 25.05.24 12:50, Pavel Stehule wrote:\n> > It looks odd - It is not intuitive, it introduces new inconsistency\n> > inside Postgres, or with solutions in other databases. 
No other database\n> > has a similar rule, so users coming from Oracle, Db2, or MSSQL, Firebird\n> > will be confused. Users that use PL/pgSQL will be confused.\n>\n> Do you have a description of what those other systems do? Maybe you\n> posted it already earlier?\n>\n\nI checked today\n\n1. MySQL\n\nMySQL knows 3 types of variables\n\nglobal - the access syntax is @@varname - they are used like our GUC and\nonly buildin system variables are supported\n\nSET @@autocommit = off;\nSELECT @@autocommit;\n\nuser defined variables - the access syntax is @varname - the behaviour is\nsimilar to psql variables, but they are server side\n\nSET @x = 100;\nSELECT @x;\n\nlocal variables - only inside PL\n\nCREATE PROCEDURE p1()\nDECLARE x int;\nBEGIN\n SET x = 100;\n SELECT x;\nEND;\n\nvariables has higher priority than column (like old plpgsql)\n\n2. MSSQL\n\nglobal variables - the access syntax is @@varname, they are used like GUC\nand little bit more - some state informations are there like @@ERROR,\n@@ROWCOUNT or @@IDENTITY\n\nlocal variables - the access syntax is @varname, and should be declared\nbefore usage by DECLARE command. The scope is limited to batch or procedure\nor function, where DECLARE command was executed.\n\nDECLARE @TestVariable AS VARCHAR(100)\nSET @TestVariable = 'Think Green'\nGO\nPRINT @TestVariable\n\nThis script fails, because PRINT is executed in another batch. So I think\nso MSSQL doesn't support session variables\n\nThere are similar mechanisms like our custom GUC and usage current_setting\nand set_config functions. Generally, in this area is MSSQL very primitive\n\nEXEC sp_set_session_context 'user_id', 4;\nSELECT SESSION_CONTEXT(N'user_id');\n\n3. DB2\n\nThe \"user defined global variables\" are similar to my proposal. 
The\ndifferences are different access rights \"READ, WRITE\" x \"SELECT, UPDATE\".\nBecause PostgreSQL has SET command for GUC, I introduced LET command (DB2\nuses SET)\n\nVariables are visible in all sessions, but value is private per session.\nVariables are not transactional. The usage is wider than my proposal. They\ncan be changed by commands SET, SELECT INTO or they can be used like OUT\nparameters of procedures. The search path (or some like that) is used for\nvariables too, but the variables has less priority than tables/columns.\n\nCREATE VARIABLE myCounter INT DEFAULT 01;\nSELECT EMPNO, LASTNAME, CASE WHEN myCounter = 1 THEN SALARY ELSE NULL END\nFROM EMPLOYEE WHERE WORKDEPT = 'A00';\nSET myCounter = 29;\n\nThere are (I think) different kinds of variables - accessed by the function\nGETVARIABLE('name', 'default') - it looks very similar to our GUC and\n`current_setting` function. These variables can be set by connection\nstring, are of varchar type and 10 values are allowed. Built-in session\nvariables (configuration) can be accessed by the function GETVARIABLE too.\n\nSQL stored procedures supports declared local variables like PL/pgSQL\n\n4. Firebird\n\nFirebird has something like our custom GUC. But it allow nested routines -\nso some functionality of session variables can be emulated with local\nvariable and nested routines (but outer variables can be used only in\nFirebird 5)\n\nThe variables are accessed by syntax :varname - like psql, but if I\nunderstand the diagrams, the char ':' is optional\n\n5. SQL/PSM\n\nStandard introduces a concept of modules that can be joined with schemas.\nThe variables are like PLpgSQL, but only local - the only temp tables can\nbe defined on module levels. These tables can be accessed only from\nroutines assigned to modules. Modules are declarative versions of our\nextensions (if I understand well, I didn't find any implementation). It\nallows you to overwrite the search path for routines assigned in the\nmodule. 
Variables are not transactional, the priority - variables/columns\nis not specified.\n\n6. Oracle\n\nOracle PL/SQL allows the use of package variables. PL/SQL is +/- ADA\nlanguage - and package variables are \"global\" variables. They are not\ndirectly visible from SQL, but Oracle allows reduced syntax for functions\nwithout arguments, so you need to write a wrapper\n\nCREATE OR REPLACE PACKAGE my_package\nAS\n FUNCTION get_a RETURN NUMBER;\nEND my_package;\n/\n\nCREATE OR REPLACE PACKAGE BODY my_package\nAS\n a NUMBER(20);\n\n FUNCTION get_a\n RETURN NUMBER\n IS\n BEGIN\n RETURN a;\n END get_a;\nEND my_package;\n\nSELECT my_package.get_a FROM DUAL;\n\nInside SQL the higher priority has SQL, inside non SQL commands like CALL\nor some PL/SQL command, the higher priority has packages.\n\nThe Oracle allows both syntax for calling function with zero arguments so\n\nSELECT my_package.get_a FROM DUAL;\n\nor\n\nSELECT my_package.get_a() FROM DUAL;\n\nThen there is less risk reduction of collision. 
Package variables persist\nin session\n\nAnother possibility is using variables in SQL*Plus (looks like our psql\nvariables, with possibility to define type on server side)\n\nThe variable should be declared by command VARIABLE and can be accessed by\nsyntax :varname in session before usage (maybe this step is optional)\n\nVARIABLE bv_variable_name VARCHAR2(30)\n\nBEGIN\n :bv_variable_name := 'Some Value';\nEND;\n\nSELECT column_name\nFROM table_name\nWHERE column_name = :bv_variable_name;\n\nThis is something between MSSQL and MYSQL session variables - but\ninternally it is binding parameters - what I know, Postgres cannot set\nthese parameters as result of some pg operation.\n\nSQL*Plus is strange creature\n\n\nGenerally, the possible collision between variables and columns are solved\nby\n\na) special syntax - using prefix like @ or :\nb) dedicated functions\nc) variables has lower priority than columns\n\nYou can see, the RDBMS allows different types of session variables,\ndifferent implementations. Usually one system allows more implementation of\nsession variables. There is a possibility of emulation implementation\nbetween RDBMS, but security setting is possible only in Oracle or DB2.\n\nRegards\n\nPavel\n\n", "msg_date": "Mon, 3 Jun 2024 22:55:46 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\n\n> You can see, the RDBMS allows different types of session variables,\n
Oracle, DB2\nsupports +/- both concepts\n\nRegards\n\nPavel\n\n\n\n\n\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n>\n>\n>\n>\n\nHiYou can see, the RDBMS allows different types of session variables, different implementations. Usually one system allows more implementation of session variables. There is a possibility of emulation implementation between RDBMS, but security setting is possible only in Oracle or DB2. MySQL concept is very handy for ad hoc work, but it is too primitive for secure or safe use in stored procedures.Oracle concept is safe, but needs packages, needs writing wrappers, needs PL/SQL.I designed a concept that is very similar to DB2 (independently on IBM), and I think it is strong and can be well mapped to PostgreSQL (no packages, more different PL, strongly typed, ...)I think it would be nice to support the MySQL concept as syntactic sugar for GUC. This can be easy and for some use cases really very handy (and less confusing for beginners - using set_confing and current_setting is intuitive for work (emulation) of session variables (although the MSSQL solution is less intuitive). SET @myvar TO 10; --> SELECT set_config('session.myvar', 10) SET @@work_mem TO '10MB'; --> SELECT set_config('work_mem', '10MB');SELECT @myvar; --> SELECT current_setting('session.myvar');SELECT @@work_mem; --> SELECT current_setting('work_mem');The syntax @ and @@ is widely used, and the mapping can be simple. This my proposal is not a replacement of the proposal of \"schema\" session variables. It is another concept, and I think so both can live together very well, because they are used for different purposes. Oracle, DB2 supports +/- both concepts RegardsPavel RegardsPavel", "msg_date": "Mon, 3 Jun 2024 23:41:02 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "> 6. Oracle\n>\n> Oracle PL/SQL allows the use of package variables. 
PL/SQL is +/- ADA\n> language - and package variables are \"global\" variables. They are not\n> directly visible from SQL, but Oracle allows reduced syntax for functions\n> without arguments, so you need to write a wrapper\n>\n> CREATE OR REPLACE PACKAGE my_package\n> AS\n> FUNCTION get_a RETURN NUMBER;\n> END my_package;\n> /\n>\n> CREATE OR REPLACE PACKAGE BODY my_package\n> AS\n> a NUMBER(20);\n>\n> FUNCTION get_a\n> RETURN NUMBER\n> IS\n> BEGIN\n> RETURN a;\n> END get_a;\n> END my_package;\n>\n> SELECT my_package.get_a FROM DUAL;\n>\n> Inside SQL the higher priority has SQL, inside non SQL commands like CALL\n> or some PL/SQL command, the higher priority has packages.\n>\n\nThe risk of collision's identifier is in some PL/SQL statements less than\nin Postgres, because SQL can be used only on dedicated positions (minimally\nin older Oracle's versions). Against other databases there is not allowed\nto use SQL everywhere as an expression. PL/SQL is an independent language,\nenvironment with its own expression executor (compiler). Other databases\nallow you to use an SQL subselect (I tested MySQL, PL/pgSQL, and I think\n(if I remember docs well) it is in standard SQL/PSM (related part of\nANSI/SQL)) as expression. The integration of SQL into PL/SQL is not too\ndeep and stored procedures look more like client code executed on the\nserver side.\n\nRegards\n\nPavel\n\n6. OracleOracle PL/SQL allows the use of package variables. PL/SQL is +/- ADA language - and package variables are \"global\" variables. 
They are not directly visible from SQL, but Oracle allows reduced syntax for functions without arguments, so you need to write a wrapperCREATE OR REPLACE PACKAGE my_packageAS    FUNCTION get_a RETURN NUMBER;END my_package;/CREATE OR REPLACE PACKAGE BODY my_packageAS    a  NUMBER(20);    FUNCTION get_a    RETURN NUMBER    IS    BEGIN      RETURN a;    END get_a;END my_package;SELECT my_package.get_a FROM DUAL;Inside SQL the higher priority has SQL, inside non SQL commands like CALL or some PL/SQL command, the higher priority has packages.The risk of collision's identifier is in some PL/SQL statements less than in Postgres, because SQL can be used only on dedicated positions (minimally in older Oracle's versions). Against other databases there is not allowed to use SQL everywhere as an expression. PL/SQL is an independent language, environment with its own expression executor (compiler). Other databases allow you to use an SQL subselect (I tested MySQL,  PL/pgSQL, and I think (if I remember docs well) it is in standard SQL/PSM (related part of ANSI/SQL)) as expression. 
The integration of SQL into PL/SQL is not too deep and stored procedures look more like client code executed on the server side.RegardsPavel", "msg_date": "Wed, 5 Jun 2024 07:21:56 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nonly rebase\n\nRegards\n\nPavel", "msg_date": "Fri, 7 Jun 2024 08:05:53 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nonly rebase\n\nRegards\n\nPavel", "msg_date": "Sun, 16 Jun 2024 21:18:53 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nfresh rebase\n\nRegards\n\nPavel", "msg_date": "Wed, 19 Jun 2024 22:35:29 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" }, { "msg_contents": "Hi\n\nrebase\n\nRegards\n\nPavel", "msg_date": "Fri, 28 Jun 2024 09:03:48 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Schema variables - new implementation for Postgres 15" } ]
[ { "msg_contents": "\nThe RelationIdGetRelation() comment says:\n\n> Caller should eventually decrement count. (Usually,\n> that happens by calling RelationClose().)\n\nHowever, it doesn't do it in ReorderBufferProcessTXN().\nI think we should close it, here is a patch that fixes it. Thoughts?\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\ndiff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c\nindex 52d06285a2..aac6ffc602 100644\n--- a/src/backend/replication/logical/reorderbuffer.c\n+++ b/src/backend/replication/logical/reorderbuffer.c\n@@ -2261,7 +2261,10 @@ ReorderBufferProcessTXN(ReorderBuffer *rb, ReorderBufferTXN *txn,\n elog(ERROR, \"could not open relation with OID %u\", relid);\n\n if (!RelationIsLogicallyLogged(relation))\n+ {\n+ RelationClose(relation);\n continue;\n+ }\n\n relations[nrelations++] = relation;\n }\n\n\n", "msg_date": "Thu, 15 Apr 2021 18:30:14 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Thu, Apr 15, 2021 at 4:00 PM Japin Li <japinli@hotmail.com> wrote:\n>\n> The RelationIdGetRelation() comment says:\n>\n> > Caller should eventually decrement count. (Usually,\n> > that happens by calling RelationClose().)\n>\n> However, it doesn't do it in ReorderBufferProcessTXN().\n> I think we should close it, here is a patch that fixes it. Thoughts?\n>\n\n+1. 
Your fix looks correct to me but can we test it in some way?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 15 Apr 2021 16:53:12 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Thu, Apr 15, 2021 at 4:00 PM Japin Li <japinli@hotmail.com> wrote:\n>> \n>> The RelationIdGetRelation() comment says:\n>> \n> Caller should eventually decrement count. (Usually,\n> that happens by calling RelationClose().)\n>> \n>> However, it doesn't do it in ReorderBufferProcessTXN().\n>> I think we should close it, here is a patch that fixes it. Thoughts?\n>> \n\n> +1. Your fix looks correct to me but can we test it in some way?\n\nI think this code has a bigger problem: it should not be using\nRelationIdGetRelation and RelationClose directly. 99.44% of\nthe backend goes through relation_open or one of the other\nrelation.c wrappers, so why doesn't this?\n\nPossibly the answer is \"it copied the equally misguided code\nin pgoutput.c\". A quick grep shows nothing else doing it this\nway.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 15 Apr 2021 13:26:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Thu, Apr 15, 2021 at 10:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > On Thu, Apr 15, 2021 at 4:00 PM Japin Li <japinli@hotmail.com> wrote:\n> >>\n> >> The RelationIdGetRelation() comment says:\n> >>\n> > Caller should eventually decrement count. (Usually,\n> > that happens by calling RelationClose().)\n> >>\n> >> However, it doesn't do it in ReorderBufferProcessTXN().\n> >> I think we should close it, here is a patch that fixes it. Thoughts?\n> >>\n>\n> > +1. 
Your fix looks correct to me but can we test it in some way?\n>\n> I think this code has a bigger problem: it should not be using\n> RelationIdGetRelation and RelationClose directly. 99.44% of\n> the backend goes through relation_open or one of the other\n> relation.c wrappers, so why doesn't this?\n>\n\nI think it is because relation_open expects either caller to have a\nlock on the relation or don't use 'NoLock' lockmode. AFAIU, we don't\nneed to acquire a lock on relation while decoding changes from WAL\nbecause it uses a historic snapshot to build a relcache entry and all\nthe later changes to the rel are absorbed while decoding WAL.\n\nI think it is also important to *not* acquire any lock on relation\notherwise it can lead to some sort of deadlock or infinite wait in the\ndecoding process. Consider a case for operations like Truncate (or if\nthe user has acquired an exclusive lock on the relation in some other\nway say via Lock command) which acquires an exclusive lock on\nrelation, it won't get replicated in synchronous mode (when\nsynchronous_standby_name is configured). The truncate operation will\nwait for the transaction to be replicated to the subscriber and the\ndecoding process will wait for the Truncate operation to finish.\n\n> Possibly the answer is \"it copied the equally misguided code\n> in pgoutput.c\".\n>\n\nI think it is following what is done during decoding, otherwise, it\nwill lead to the problems as described above. 
We are already\ndiscussing one of the similar problems [1] where pgoutput\nunintentionally acquired a lock on the index and lead to a sort of\ndeadlock.\n\nIf the above understanding is correct, I think we might want to\nimprove comments in this area.\n\n[1] - https://www.postgresql.org/message-id/OS0PR01MB6113C2499C7DC70EE55ADB82FB759%40OS0PR01MB6113.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 16 Apr 2021 08:42:40 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "Hi,\n\nOn 2021-04-16 08:42:40 +0530, Amit Kapila wrote:\n> I think it is because relation_open expects either caller to have a\n> lock on the relation or don't use 'NoLock' lockmode. AFAIU, we don't\n> need to acquire a lock on relation while decoding changes from WAL\n> because it uses a historic snapshot to build a relcache entry and all\n> the later changes to the rel are absorbed while decoding WAL.\n\nRight.\n\n\n> I think it is also important to *not* acquire any lock on relation\n> otherwise it can lead to some sort of deadlock or infinite wait in the\n> decoding process. Consider a case for operations like Truncate (or if\n> the user has acquired an exclusive lock on the relation in some other\n> way say via Lock command) which acquires an exclusive lock on\n> relation, it won't get replicated in synchronous mode (when\n> synchronous_standby_name is configured). The truncate operation will\n> wait for the transaction to be replicated to the subscriber and the\n> decoding process will wait for the Truncate operation to finish.\n\nHowever, this cannot be really relied upon for catalog tables. An output\nfunction might acquire locks or such. 
But for those we do not need to\ndecode contents...\n\n\n\nThis made me take a brief look at pgoutput.c - maybe I am missing\nsomething, but how is the following not a memory leak?\n\nstatic void\nmaybe_send_schema(LogicalDecodingContext *ctx,\n ReorderBufferTXN *txn, ReorderBufferChange *change,\n Relation relation, RelationSyncEntry *relentry)\n{\n...\n /* Map must live as long as the session does. */\n oldctx = MemoryContextSwitchTo(CacheMemoryContext);\n relentry->map = convert_tuples_by_name(CreateTupleDescCopy(indesc),\n CreateTupleDescCopy(outdesc));\n MemoryContextSwitchTo(oldctx);\n send_relation_and_attrs(ancestor, xid, ctx);\n RelationClose(ancestor);\n\nIf - and that's common - convert_tuples_by_name() won't have to do\nanything, the copied tuple descs will be permanently leaked.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 16 Apr 2021 10:54:49 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Fri, Apr 16, 2021 at 11:24 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> > I think it is also important to *not* acquire any lock on relation\n> > otherwise it can lead to some sort of deadlock or infinite wait in the\n> > decoding process. Consider a case for operations like Truncate (or if\n> > the user has acquired an exclusive lock on the relation in some other\n> > way say via Lock command) which acquires an exclusive lock on\n> > relation, it won't get replicated in synchronous mode (when\n> > synchronous_standby_name is configured). The truncate operation will\n> > wait for the transaction to be replicated to the subscriber and the\n> > decoding process will wait for the Truncate operation to finish.\n>\n> However, this cannot be really relied upon for catalog tables. An output\n> function might acquire locks or such. 
But for those we do not need to\n> decode contents...\n>\n\nTrue, so, if we don't need to decode contents then we won't have the\nproblems of the above kind.\n\n>\n>\n> This made me take a brief look at pgoutput.c - maybe I am missing\n> something, but how is the following not a memory leak?\n>\n> static void\n> maybe_send_schema(LogicalDecodingContext *ctx,\n> ReorderBufferTXN *txn, ReorderBufferChange *change,\n> Relation relation, RelationSyncEntry *relentry)\n> {\n> ...\n> /* Map must live as long as the session does. */\n> oldctx = MemoryContextSwitchTo(CacheMemoryContext);\n> relentry->map = convert_tuples_by_name(CreateTupleDescCopy(indesc),\n> CreateTupleDescCopy(outdesc));\n> MemoryContextSwitchTo(oldctx);\n> send_relation_and_attrs(ancestor, xid, ctx);\n> RelationClose(ancestor);\n>\n> If - and that's common - convert_tuples_by_name() won't have to do\n> anything, the copied tuple descs will be permanently leaked.\n>\n\nI also think this is a permanent leak. I think we need to free all the\nmemory associated with this map on the invalidation of this particular\nrelsync entry (basically in rel_sync_cache_relation_cb).\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 17 Apr 2021 10:00:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Thu, Apr 15, 2021 at 4:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Apr 15, 2021 at 4:00 PM Japin Li <japinli@hotmail.com> wrote:\n> >\n> > The RelationIdGetRelation() comment says:\n> >\n> > > Caller should eventually decrement count. (Usually,\n> > > that happens by calling RelationClose().)\n> >\n> > However, it doesn't do it in ReorderBufferProcessTXN().\n> > I think we should close it, here is a patch that fixes it. Thoughts?\n> >\n>\n> +1. 
Your fix looks correct to me but can we test it in some way?\n>\n\nI have tried to find a test but not able to find one. I have tried\nwith a foreign table but we don't log truncate for it, see\nExecuteTruncate. It has a check that it will log for relids where\nRelationIsLogicallyLogged. If that is the case, it is not clear to me\nhow we can ever hit this condition? Have you tried to find the test?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 17 Apr 2021 11:39:34 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Fri, Apr 16, 2021 at 11:24 PM Andres Freund <andres@anarazel.de> wrote:\n>\n>\n> > I think it is also important to *not* acquire any lock on relation\n> > otherwise it can lead to some sort of deadlock or infinite wait in the\n> > decoding process. Consider a case for operations like Truncate (or if\n> > the user has acquired an exclusive lock on the relation in some other\n> > way say via Lock command) which acquires an exclusive lock on\n> > relation, it won't get replicated in synchronous mode (when\n> > synchronous_standby_name is configured). The truncate operation will\n> > wait for the transaction to be replicated to the subscriber and the\n> > decoding process will wait for the Truncate operation to finish.\n>\n> However, this cannot be really relied upon for catalog tables. An output\n> function might acquire locks or such. But for those we do not need to\n> decode contents...\n>\n\nI see that if we define a user_catalog_table (create table t1_cat(c1\nint) WITH(user_catalog_table = true);), we are able to decode\noperations like (insert, truncate) on such a table. 
What do you mean\nby \"But for those we do not need to decode contents\"?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 17 Apr 2021 12:01:50 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "\nOn Sat, 17 Apr 2021 at 14:09, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Thu, Apr 15, 2021 at 4:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Thu, Apr 15, 2021 at 4:00 PM Japin Li <japinli@hotmail.com> wrote:\n>> >\n>> > The RelationIdGetRelation() comment says:\n>> >\n>> > > Caller should eventually decrement count. (Usually,\n>> > > that happens by calling RelationClose().)\n>> >\n>> > However, it doesn't do it in ReorderBufferProcessTXN().\n>> > I think we should close it, here is a patch that fixes it. Thoughts?\n>> >\n>>\n>> +1. Your fix looks correct to me but can we test it in some way?\n>>\n>\n> I have tried to find a test but not able to find one. I have tried\n> with a foreign table but we don't log truncate for it, see\n> ExecuteTruncate. It has a check that it will log for relids where\n> RelationIsLogicallyLogged. If that is the case, it is not clear to me\n> how we can ever hit this condition? Have you tried to find the test?\n\nI also don't find a test for this. It is introduced in 5dfd1e5a6696,\nwrote by Simon Riggs, Marco Nenciarini and Peter Eisentraut. 
Maybe they\ncan explain when we can enter this condition?\n\n--\nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Sat, 17 Apr 2021 14:35:20 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Sat, Apr 17, 2021 at 12:05 PM Japin Li <japinli@hotmail.com> wrote:\n>\n> On Sat, 17 Apr 2021 at 14:09, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Thu, Apr 15, 2021 at 4:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>\n> >> On Thu, Apr 15, 2021 at 4:00 PM Japin Li <japinli@hotmail.com> wrote:\n> >> >\n> >> > The RelationIdGetRelation() comment says:\n> >> >\n> >> > > Caller should eventually decrement count. (Usually,\n> >> > > that happens by calling RelationClose().)\n> >> >\n> >> > However, it doesn't do it in ReorderBufferProcessTXN().\n> >> > I think we should close it, here is a patch that fixes it. Thoughts?\n> >> >\n> >>\n> >> +1. Your fix looks correct to me but can we test it in some way?\n> >>\n> >\n> > I have tried to find a test but not able to find one. I have tried\n> > with a foreign table but we don't log truncate for it, see\n> > ExecuteTruncate. It has a check that it will log for relids where\n> > RelationIsLogicallyLogged. If that is the case, it is not clear to me\n> > how we can ever hit this condition? Have you tried to find the test?\n>\n> I also don't find a test for this. It is introduced in 5dfd1e5a6696,\n> wrote by Simon Riggs, Marco Nenciarini and Peter Eisentraut. Maybe they\n> can explain when we can enter this condition?\n>\n\nMy guess is that this has been copied from the code a few lines above\nto handle insert/update/delete where it is required to handle some DDL\nops like Alter Table but I think we don't need it here (for Truncate\nop).
If that understanding turns out to be true then we should either\nhave an Assert for this or an elog message.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 17 Apr 2021 12:43:11 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Sat, Apr 17, 2021 at 12:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Apr 16, 2021 at 11:24 PM Andres Freund <andres@anarazel.de> wrote:\n> >\n> >\n> > > I think it is also important to *not* acquire any lock on relation\n> > > otherwise it can lead to some sort of deadlock or infinite wait in the\n> > > decoding process. Consider a case for operations like Truncate (or if\n> > > the user has acquired an exclusive lock on the relation in some other\n> > > way say via Lock command) which acquires an exclusive lock on\n> > > relation, it won't get replicated in synchronous mode (when\n> > > synchronous_standby_name is configured). The truncate operation will\n> > > wait for the transaction to be replicated to the subscriber and the\n> > > decoding process will wait for the Truncate operation to finish.\n> >\n> > However, this cannot be really relied upon for catalog tables. An output\n> > function might acquire locks or such. But for those we do not need to\n> > decode contents...\n> >\n>\n> I see that if we define a user_catalog_table (create table t1_cat(c1\n> int) WITH(user_catalog_table = true);), we are able to decode\n> operations like (insert, truncate) on such a table. 
What do you mean\n> by \"But for those we do not need to decode contents\"?\n>\n\nI think we are allowed to decode the operations on user catalog tables\nbecause we are using RelationIsLogicallyLogged() instead of\nRelationIsAccessibleInLogicalDecoding() in ReorderBufferProcessTXN().\nBased on this discussion, I think we should not be allowing decoding\nof operations on user catalog tables, so we should use\nRelationIsAccessibleInLogicalDecoding to skip such ops in\nReorderBufferProcessTXN(). Am, I missing something?\n\nCan you please clarify?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 19 Apr 2021 17:52:47 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Sat, Apr 17, 2021 at 1:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Fri, Apr 16, 2021 at 11:24 PM Andres Freund <andres@anarazel.de> wrote:> > This made me take a brief look at pgoutput.c - maybe I am missing\n> > something, but how is the following not a memory leak?\n> >\n> > static void\n> > maybe_send_schema(LogicalDecodingContext *ctx,\n> > ReorderBufferTXN *txn, ReorderBufferChange *change,\n> > Relation relation, RelationSyncEntry *relentry)\n> > {\n> > ...\n> > /* Map must live as long as the session does. */\n> > oldctx = MemoryContextSwitchTo(CacheMemoryContext);\n> > relentry->map = convert_tuples_by_name(CreateTupleDescCopy(indesc),\n> > CreateTupleDescCopy(outdesc));\n> > MemoryContextSwitchTo(oldctx);\n> > send_relation_and_attrs(ancestor, xid, ctx);\n> > RelationClose(ancestor);\n> >\n> > If - and that's common - convert_tuples_by_name() won't have to do\n> > anything, the copied tuple descs will be permanently leaked.\n> >\n>\n> I also think this is a permanent leak. 
I think we need to free all the\n> memory associated with this map on the invalidation of this particular\n> relsync entry (basically in rel_sync_cache_relation_cb).\n\nI agree there's a problem here.\n\nBack in:\n\nhttps://www.postgresql.org/message-id/CA%2BHiwqEeU19iQgjN6HF1HTPU0L5%2BJxyS5CmxaOVGNXBAfUY06Q%40mail.gmail.com\n\nI had proposed to move the map creation from maybe_send_schema() to\nget_rel_sync_entry(), mainly because the latter is where I realized it\nbelongs, though a bit too late. Attached is the part of the patch\nfor this particular issue. It also takes care to release the copied\nTupleDescs if the map is found to be unnecessary, thus preventing\nleaking into CacheMemoryContext.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 20 Apr 2021 12:06:44 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Saturday, April 17, 2021 4:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Sat, Apr 17, 2021 at 12:05 PM Japin Li <japinli@hotmail.com> wrote:\r\n> >\r\n> > On Sat, 17 Apr 2021 at 14:09, Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > > On Thu, Apr 15, 2021 at 4:53 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > >>\r\n> > >> On Thu, Apr 15, 2021 at 4:00 PM Japin Li <japinli@hotmail.com> wrote:\r\n> > >> >\r\n> > >> > The RelationIdGetRelation() comment says:\r\n> > >> >\r\n> > >> > > Caller should eventually decrement count. (Usually, that\r\n> > >> > > happens by calling RelationClose().)\r\n> > >> >\r\n> > >> > However, it doesn't do it in ReorderBufferProcessTXN().\r\n> > >> > I think we should close it, here is a patch that fixes it. Thoughts?\r\n> > >> >\r\n> > >>\r\n> > >> +1. Your fix looks correct to me but can we test it in some way?\r\n> > >>\r\n> > >\r\n> > > I have tried to find a test but not able to find one. 
I have tried\r\n> > > with a foreign table but we don't log truncate for it, see\r\n> > > ExecuteTruncate. It has a check that it will log for relids where\r\n> > > RelationIsLogicallyLogged. If that is the case, it is not clear to\r\n> > > me how we can ever hit this condition? Have you tried to find the test?\r\n> >\r\n> > I also don't find a test for this. It is introduced in 5dfd1e5a6696,\r\n> > wrote by Simon Riggs, Marco Nenciarini and Peter Eisentraut. Maybe\r\n> > they can explain when we can enter this condition?\r\n> \r\n> My guess is that this has been copied from the code a few lines above to\r\n> handle insert/update/delete where it is required to handle some DDL ops like\r\n> Alter Table but I think we don't need it here (for Truncate op). If that\r\n> understanding turns out to be true then we should either have an Assert for\r\n> this or an elog message.\r\nIn this thread, we are discussing 3 topics below...\r\n\r\n(1) necessity of the check for REORDER_BUFFER_CHANGE_TRUNCATE in ReorderBufferProcessTXN()\r\n(2) discussion of whether we disallow decoding of operations on user catalog tables or not\r\n(3) memory leak of maybe_send_schema() (patch already provided)\r\n\r\nLet's address those one by one.\r\nIn terms of (1), which was close to the motivation of this thread,\r\nfirst of all, I traced the truncate processing\r\nand I think the check is done by truncate command side as well.\r\nI preferred Assert rather than never called elog,\r\nbut it's OK to choose elog if someone has strong opinion on it.\r\nAttached the patch for this.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi", "msg_date": "Fri, 23 Apr 2021 14:33:53 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "\nOn Fri, 23 Apr 2021 at 22:33, osumi.takamichi@fujitsu.com <osumi.takamichi@fujitsu.com> wrote:\n> On Saturday, April 17, 2021 4:13 
PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> On Sat, Apr 17, 2021 at 12:05 PM Japin Li <japinli@hotmail.com> wrote:\n>> >\n>> > On Sat, 17 Apr 2021 at 14:09, Amit Kapila <amit.kapila16@gmail.com>\n>> wrote:\n>> > > On Thu, Apr 15, 2021 at 4:53 PM Amit Kapila <amit.kapila16@gmail.com>\n>> wrote:\n>> > >>\n>> > >> On Thu, Apr 15, 2021 at 4:00 PM Japin Li <japinli@hotmail.com> wrote:\n>> > >> >\n>> > >> > The RelationIdGetRelation() comment says:\n>> > >> >\n>> > >> > > Caller should eventually decrement count. (Usually, that\n>> > >> > > happens by calling RelationClose().)\n>> > >> >\n>> > >> > However, it doesn't do it in ReorderBufferProcessTXN().\n>> > >> > I think we should close it, here is a patch that fixes it. Thoughts?\n>> > >> >\n>> > >>\n>> > >> +1. Your fix looks correct to me but can we test it in some way?\n>> > >>\n>> > >\n>> > > I have tried to find a test but not able to find one. I have tried\n>> > > with a foreign table but we don't log truncate for it, see\n>> > > ExecuteTruncate. It has a check that it will log for relids where\n>> > > RelationIsLogicallyLogged. If that is the case, it is not clear to\n>> > > me how we can ever hit this condition? Have you tried to find the test?\n>> >\n>> > I also don't find a test for this. It is introduced in 5dfd1e5a6696,\n>> > wrote by Simon Riggs, Marco Nenciarini and Peter Eisentraut. Maybe\n>> > they can explain when we can enter this condition?\n>> \n>> My guess is that this has been copied from the code a few lines above to\n>> handle insert/update/delete where it is required to handle some DDL ops like\n>> Alter Table but I think we don't need it here (for Truncate op). 
If that\n>> understanding turns out to be true then we should either have an Assert for\n>> this or an elog message.\n> In this thread, we are discussing 3 topics below...\n>\n> (1) necessity of the check for REORDER_BUFFER_CHANGE_TRUNCATE in ReorderBufferProcessTXN()\n> (2) discussion of whether we disallow decoding of operations on user catalog tables or not\n> (3) memory leak of maybe_send_schema() (patch already provided)\n>\n> Let's address those one by one.\n> In terms of (1), which was close to the motivation of this thread,\n> first of all, I traced the truncate processing\n> and I think the check is done by truncate command side as well.\n> I preferred Assert rather than never called elog,\n> but it's OK to choose elog if someone has strong opinion on it.\n> Attached the patch for this.\n>\n\n+1, make check-world passed.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Sun, 25 Apr 2021 11:23:42 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Fri, Apr 23, 2021 at 8:03 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Saturday, April 17, 2021 4:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > I also don't find a test for this. It is introduced in 5dfd1e5a6696,\n> > > wrote by Simon Riggs, Marco Nenciarini and Peter Eisentraut. Maybe\n> > > they can explain when we can enter this condition?\n> >\n> > My guess is that this has been copied from the code a few lines above to\n> > handle insert/update/delete where it is required to handle some DDL ops like\n> > Alter Table but I think we don't need it here (for Truncate op). 
If that\n> > understanding turns out to be true then we should either have an Assert for\n> > this or an elog message.\n> In this thread, we are discussing 3 topics below...\n>\n> (1) necessity of the check for REORDER_BUFFER_CHANGE_TRUNCATE in ReorderBufferProcessTXN()\n> (2) discussion of whether we disallow decoding of operations on user catalog tables or not\n> (3) memory leak of maybe_send_schema() (patch already provided)\n>\n> Let's address those one by one.\n> In terms of (1), which was close to the motivation of this thread,\n>\n\nI think (1) and (2) are related because if we need (2) then the check\nremoved by (1) needs to be replaced with another check. So, I am not\nsure how to make this decision.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 26 Apr 2021 10:34:31 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Tuesday, April 20, 2021 12:07 PM Amit Langote <amitlangote09@gmail.com> wrote:\r\n> On Sat, Apr 17, 2021 at 1:30 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > On Fri, Apr 16, 2021 at 11:24 PM Andres Freund <andres@anarazel.de>\r\n> > wrote:> > This made me take a brief look at pgoutput.c - maybe I am\r\n> > missing\r\n> > > something, but how is the following not a memory leak?\r\n> > >\r\n> > > static void\r\n> > > maybe_send_schema(LogicalDecodingContext *ctx,\r\n> > > ReorderBufferTXN *txn, ReorderBufferChange\r\n> *change,\r\n> > > Relation relation, RelationSyncEntry *relentry) {\r\n> > > ...\r\n> > > /* Map must live as long as the session does. 
*/\r\n> > > oldctx = MemoryContextSwitchTo(CacheMemoryContext);\r\n> > > relentry->map =\r\n> convert_tuples_by_name(CreateTupleDescCopy(indesc),\r\n> > >\r\n> CreateTupleDescCopy(outdesc));\r\n> > > MemoryContextSwitchTo(oldctx);\r\n> > > send_relation_and_attrs(ancestor, xid, ctx);\r\n> > > RelationClose(ancestor);\r\n> > >\r\n> > > If - and that's common - convert_tuples_by_name() won't have to do\r\n> > > anything, the copied tuple descs will be permanently leaked.\r\n> > >\r\n> >\r\n> > I also think this is a permanent leak. I think we need to free all the\r\n> > memory associated with this map on the invalidation of this particular\r\n> > relsync entry (basically in rel_sync_cache_relation_cb).\r\n> \r\n> I agree there's a problem here.\r\n> \r\n> Back in:\r\n> \r\n> https://www.postgresql.org/message-id/CA%2BHiwqEeU19iQgjN6HF1HTP\r\n> U0L5%2BJxyS5CmxaOVGNXBAfUY06Q%40mail.gmail.com\r\n> \r\n> I had proposed to move the map creation from maybe_send_schema() to\r\n> get_rel_sync_entry(), mainly because the latter is where I realized it\r\n> belongs, though a bit too late. Attached is the part of the patch\r\n> for this particular issue. It also takes care to release the copied TupleDescs\r\n> if the map is found to be unnecessary, thus preventing leaking into\r\n> CacheMemoryContext.\r\nThank you for sharing the patch.\r\nYour patch looks correct to me. Make check-world has\r\npassed with it. 
Also, I agree with the idea to place\r\nthe processing to set the map in the get_rel_sync_entry.\r\n\r\nOne thing I'd like to ask is an advanced way to confirm\r\nthe memory leak is solved by the patch, not just by running make check-world.\r\n\r\nI used OSS HEAD and valgrind, expecting that\r\nI could see function stack which has a call of CreateTupleDescCopy\r\nfrom both pgoutput_change and pgoutput_truncate as memory leak report\r\nin the valgrind logs, and they disappear after applying the patch.\r\n\r\nBut, I cannot find the pair of pgoutput functions and CreateTupleDescCopy in one report\r\nwhen I used OSS HEAD, which means that I need to do advanced testing to check if\r\nthe memory leak of CreateTupleDescCopy is addressed.\r\nI collected the logs from RT at src/test/subscription so should pass the routes of our interest.\r\n\r\nCould someone give me\r\nan advise about the way to confirm the memory leak is solved ?\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Tue, 27 Apr 2021 12:37:00 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Monday, April 26, 2021 2:05 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Fri, Apr 23, 2021 at 8:03 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > On Saturday, April 17, 2021 4:13 PM Amit Kapila\r\n> <amit.kapila16@gmail.com> wrote:\r\n> > > > I also don't find a test for this. It is introduced in\r\n> > > > 5dfd1e5a6696, wrote by Simon Riggs, Marco Nenciarini and Peter\r\n> > > > Eisentraut. 
Maybe they can explain when we can enter this condition?\r\n> > >\r\n> > > My guess is that this has been copied from the code a few lines\r\n> > > above to handle insert/update/delete where it is required to handle\r\n> > > some DDL ops like Alter Table but I think we don't need it here (for\r\n> > > Truncate op). If that understanding turns out to be true then we\r\n> > > should either have an Assert for this or an elog message.\r\n> > In this thread, we are discussing 3 topics below...\r\n> >\r\n> > (1) necessity of the check for REORDER_BUFFER_CHANGE_TRUNCATE\r\n> in\r\n> > ReorderBufferProcessTXN()\r\n> > (2) discussion of whether we disallow decoding of operations on user\r\n> > catalog tables or not\r\n> > (3) memory leak of maybe_send_schema() (patch already provided)\r\n> >\r\n> > Let's address those one by one.\r\n> > In terms of (1), which was close to the motivation of this thread,\r\n> >\r\n> \r\n> I think (1) and (2) are related because if we need (2) then the check removed\r\n> by (1) needs to be replaced with another check. So, I am not sure how to\r\n> make this decision.\r\nYeah, you are right.\r\n\r\n\r\nOn Monday, April 19, 2021 9:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Sat, Apr 17, 2021 at 12:01 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > On Fri, Apr 16, 2021 at 11:24 PM Andres Freund <andres@anarazel.de>\r\n> wrote:\r\n> > >\r\n> > > > I think it is also important to *not* acquire any lock on relation\r\n> > > > otherwise it can lead to some sort of deadlock or infinite wait in\r\n> > > > the decoding process. Consider a case for operations like Truncate\r\n> > > > (or if the user has acquired an exclusive lock on the relation in\r\n> > > > some other way say via Lock command) which acquires an exclusive\r\n> > > > lock on relation, it won't get replicated in synchronous mode\r\n> > > > (when synchronous_standby_name is configured). 
The truncate\r\n> > > > operation will wait for the transaction to be replicated to the\r\n> > > > subscriber and the decoding process will wait for the Truncate\r\n> operation to finish.\r\n> > >\r\n> > > However, this cannot be really relied upon for catalog tables. An\r\n> > > output function might acquire locks or such. But for those we do not\r\n> > > need to decode contents...\r\n> > >\r\n> >\r\n> > I see that if we define a user_catalog_table (create table t1_cat(c1\r\n> > int) WITH(user_catalog_table = true);), we are able to decode\r\n> > operations like (insert, truncate) on such a table. What do you mean\r\n> > by \"But for those we do not need to decode contents\"?\r\n> >\r\n> \r\n> I think we are allowed to decode the operations on user catalog tables\r\n> because we are using RelationIsLogicallyLogged() instead of\r\n> RelationIsAccessibleInLogicalDecoding() in ReorderBufferProcessTXN().\r\n> Based on this discussion, I think we should not be allowing decoding of\r\n> operations on user catalog tables, so we should use\r\n> RelationIsAccessibleInLogicalDecoding to skip such ops in\r\n> ReorderBufferProcessTXN(). Am, I missing something?\r\n> \r\n> Can you please clarify?\r\nI don't understand that point, either.\r\n\r\nI read the context where the user_catalog_table was introduced - [1].\r\nThere, I couldn't find any discussion if we should skip decode operations\r\non that kind of tables or not. Accordingly, we just did not conclude it, I suppose.\r\n\r\nWhat surprised me a bit is to decode operations of system catalog table are considered like [2]\r\nsomehow at the time. I cannot find any concrete description of such use cases in the thread, though.\r\n\r\nAnyway, I felt disallowing decoding of operations on user catalog tables\r\ndoesn't spoil the feature's purpose. So, I'm OK to do so. 
What do you think ?\r\n\r\n\r\n[1] - https://www.postgresql.org/message-id/flat/20130914204913.GA4071%40awork2.anarazel.de\r\n\r\nNote that in this discussion, user_catalog_table was renamed from\r\ntreat_as_catalog_table in the middle of the thread. Searching it might help you to shorten your time to have a look at it.\r\n\r\n[2] - https://www.postgresql.org/message-id/CA%2BTgmobhDCHuckL_86wRDWJ31Gw3Y1HrQ4yUKEn7U1_hTbeVqQ%40mail.gmail.com\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Wed, 28 Apr 2021 12:06:45 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Wed, Apr 28, 2021 at 5:36 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Monday, April 26, 2021 2:05 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Fri, Apr 23, 2021 at 8:03 PM osumi.takamichi@fujitsu.com\n> > <osumi.takamichi@fujitsu.com> wrote:\n> > I think we are allowed to decode the operations on user catalog tables\n> > because we are using RelationIsLogicallyLogged() instead of\n> > RelationIsAccessibleInLogicalDecoding() in ReorderBufferProcessTXN().\n> > Based on this discussion, I think we should not be allowing decoding of\n> > operations on user catalog tables, so we should use\n> > RelationIsAccessibleInLogicalDecoding to skip such ops in\n> > ReorderBufferProcessTXN(). Am, I missing something?\n> >\n> > Can you please clarify?\n> I don't understand that point, either.\n>\n> I read the context where the user_catalog_table was introduced - [1].\n> There, I couldn't find any discussion if we should skip decode operations\n> on that kind of tables or not. Accordingly, we just did not conclude it, I suppose.\n>\n> What surprised me a bit is to decode operations of system catalog table are considered like [2]\n> somehow at the time. 
I cannot find any concrete description of such use cases in the thread, though.\n>\n> Anyway, I felt disallowing decoding of operations on user catalog tables\n> doesn't spoil the feature's purpose. So, I'm OK to do so. What do you think ?\n>\n\nI am not so sure about it because I think we don't have any example of\nuser_catalog_tables in the core code. This is the reason I was kind of\nlooking towards Andres to clarify this. Right now, if the user\nperforms TRUNCATE on user_catalog_table in synchronous mode then it\nwill hang in case the decoding plugin takes even share lock on it. The\nmain reason is that we allow decoding of TRUNCATE operation for\nuser_catalog_tables. I think even if we want to allow decoding of\nother operations on user_catalog_table, the decoding of TRUNCATE\nshould be prohibited but maybe we shouldn't allow decoding of any\noperation on such tables as we don't do it for system catalog tables.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 29 Apr 2021 11:00:58 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "Takamichi-san,\n\nOn Tue, Apr 27, 2021 at 9:37 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n> On Tuesday, April 20, 2021 12:07 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Sat, Apr 17, 2021 at 1:30 PM Amit Kapila <amit.kapila16@gmail.com>\n> > wrote:\n> > > On Fri, Apr 16, 2021 at 11:24 PM Andres Freund <andres@anarazel.de>\n> > > wrote:> > This made me take a brief look at pgoutput.c - maybe I am\n> > > missing\n> > > > something, but how is the following not a memory leak?\n> > > >\n> > > > static void\n> > > > maybe_send_schema(LogicalDecodingContext *ctx,\n> > > > ReorderBufferTXN *txn, ReorderBufferChange\n> > *change,\n> > > > Relation relation, RelationSyncEntry *relentry) {\n> > > > ...\n> > > > /* Map must live as long as the session does. 
*/\n> > > > oldctx = MemoryContextSwitchTo(CacheMemoryContext);\n> > > > relentry->map =\n> > convert_tuples_by_name(CreateTupleDescCopy(indesc),\n> > > >\n> > CreateTupleDescCopy(outdesc));\n> > > > MemoryContextSwitchTo(oldctx);\n> > > > send_relation_and_attrs(ancestor, xid, ctx);\n> > > > RelationClose(ancestor);\n> > > >\n> > > > If - and that's common - convert_tuples_by_name() won't have to do\n> > > > anything, the copied tuple descs will be permanently leaked.\n> > > >\n> > >\n> > > I also think this is a permanent leak. I think we need to free all the\n> > > memory associated with this map on the invalidation of this particular\n> > > relsync entry (basically in rel_sync_cache_relation_cb).\n> >\n> > I agree there's a problem here.\n> >\n> > Back in:\n> >\n> > https://www.postgresql.org/message-id/CA%2BHiwqEeU19iQgjN6HF1HTP\n> > U0L5%2BJxyS5CmxaOVGNXBAfUY06Q%40mail.gmail.com\n> >\n> > I had proposed to move the map creation from maybe_send_schema() to\n> > get_rel_sync_entry(), mainly because the latter is where I realized it\n> > belongs, though a bit too late. Attached is the part of the patch\n> > for this particular issue. It also takes care to release the copied TupleDescs\n> > if the map is found to be unnecessary, thus preventing leaking into\n> > CacheMemoryContext.\n>\n> Thank you for sharing the patch.\n> Your patch looks correct to me. Make check-world has\n> passed with it. 
Also, I agree with the idea to place\n> the processing to set the map in the get_rel_sync_entry.\n\nThanks for checking.\n\n> One thing I'd like to ask is an advanced way to confirm\n> the memory leak is solved by the patch, not just by running make check-world.\n>\n> I used OSS HEAD and valgrind, expecting that\n> I could see function stack which has a call of CreateTupleDescCopy\n> from both pgoutput_change and pgoutput_truncate as memory leak report\n> in the valgrind logs, and they disappear after applying the patch.\n>\n> But, I cannot find the pair of pgoutput functions and CreateTupleDescCopy in one report\n> when I used OSS HEAD, which means that I need to do advanced testing to check if\n> the memory leak of CreateTupleDescCopy is addressed.\n> I collected the logs from RT at src/test/subscription so should pass the routes of our interest.\n>\n> Could someone give me\n> an advise about the way to confirm the memory leak is solved ?\n\nI have not used valgrind or other testing methods to check this.\n\nTo me, it's clear in this case by only looking at the code that the\nTupleDescs returned by CreateTupleDescCopy() ought to be freed when\nconvert_tuples_by_name() determines that no map is necessary such that\nthere will be no need to keep those TupleDesc copies around. Failing\nthat, those copies end up in CacheMemoryContext without anything\npointing to them, hence the leak. 
Actually, since maybe_send_schema()\ndoesn't execute this code if schema_sent is found to have been set by\nearlier calls, the leak in question should occur only once in most\ntests.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 May 2021 13:20:36 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Thursday, April 29, 2021 2:31 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> I am not so sure about it because I think we don't have any example of\r\n> user_catalog_tables in the core code. This is the reason I was kind of looking\r\n> towards Andres to clarify this. Right now, if the user performs TRUNCATE on\r\n> user_catalog_table in synchronous mode then it will hang in case the\r\n> decoding plugin takes even share lock on it. The main reason is that we allow\r\n> decoding of TRUNCATE operation for user_catalog_tables. 
I think even if we\r\n> want to allow decoding of other operations on user_catalog_table, the\r\n> decoding of TRUNCATE should be prohibited but maybe we shouldn't allow\r\n> decoding of any operation on such tables as we don't do it for system catalog\r\n> tables.\r\n\r\nI tried the following scenarios for trying to reproduce this.\r\n\r\nScenario1:\r\n(1) set up 1 publisher and 1 subscriber\r\n(2) create table with user_catalog_table = true on the pub\r\n(3) insert some data to this table\r\n(4) create publication for the table on the pub\r\n(5) create table with user_catalog_table = true on the sub\r\n(6) create subscription on the sub\r\n(7) add synchronous_standby_names to publisher's configuration and restart the pub\r\n(8) have 1 session to hold a lock to the user_catalog_table on the pub in access share mode\r\n(9) have another session to truncate the user_catalog_table on the pub\r\n\r\nHere, It keeps waiting but I'm not sure this is the scenario described above,\r\nsince this deadlock is caused by (8)'s lock.\r\n\r\nScenario2:\r\n(1) set up 1 publisher and 1 subscriber\r\n(2) create table with user_catalog_table = true on the pub\r\n(3) insert some data to this table\r\n(4) create publication for the table on the pub\r\n(5) create table with user_catalog_table = true on the sub\r\n(6) create subscription on the sub\r\n(7) add synchronous_standby_names to publisher's configuration and restart the pub\r\n(8) have a session to truncate the user_catalog_table on the pub\r\n\r\nScenario 2 was successful.\r\n\r\nAre these the scenario you have in mind,\r\nif not please let me know for the missing steps.\r\nI would like to reproduce the scenario and write a patch to fix this.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Thu, 13 May 2021 05:45:16 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Forget close an open relation in ReorderBufferProcessTXN()" }, { 
"msg_contents": "On Thu, May 13, 2021 at 11:15 AM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Thursday, April 29, 2021 2:31 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > I am not so sure about it because I think we don't have any example of\n> > user_catalog_tables in the core code. This is the reason I was kind of looking\n> > towards Andres to clarify this. Right now, if the user performs TRUNCATE on\n> > user_catalog_table in synchronous mode then it will hang in case the\n> > decoding plugin takes even share lock on it. The main reason is that we allow\n> > decoding of TRUNCATE operation for user_catalog_tables. I think even if we\n> > want to allow decoding of other operations on user_catalog_table, the\n> > decoding of TRUNCATE should be prohibited but maybe we shouldn't allow\n> > decoding of any operation on such tables as we don't do it for system catalog\n> > tables.\n>\n> I tried the following scenarios for trying to reproduce this.\n>\n> Scenario1:\n> (1) set up 1 publisher and 1 subscriber\n> (2) create table with user_catalog_table = true on the pub\n> (3) insert some data to this table\n> (4) create publication for the table on the pub\n> (5) create table with user_catalog_table = true on the sub\n> (6) create subscription on the sub\n> (7) add synchronous_standby_names to publisher's configuration and restart the pub\n> (8) have 1 session to hold a lock to the user_catalog_table on the pub in access share mode\n> (9) have another session to truncate the user_catalog_table on the pub\n>\n> Here, It keeps waiting but I'm not sure this is the scenario described above,\n> since this deadlock is caused by (8)'s lock.\n>\n\nThis is a lock time-out scenario, not a deadlock.\n\n> Scenario2:\n> (1) set up 1 publisher and 1 subscriber\n> (2) create table with user_catalog_table = true on the pub\n> (3) insert some data to this table\n> (4) create publication for the table on the pub\n> (5) create table with 
user_catalog_table = true on the sub\n> (6) create subscription on the sub\n> (7) add synchronous_standby_names to publisher's configuration and restart the pub\n> (8) have a session to truncate the user_catalog_table on the pub\n>\n> Scenario 2 was successful.\n>\n\nYeah, because pgoutput or for that matter even test_decoding doesn't\nacquire a lock on user catalog tables.\n\n> Are these the scenario you have in mind,\n> if not please let me know for the missing steps.\n> I would like to reproduce the scenario and write a patch to fix this.\n>\n\nI don't think we can reproduce it with core plugins as they don't lock\nuser catalog tables. We either need to write a minimal decoding plugin\nwhere we acquire a lock (maybe share lock) on the user catalog table\nor hack test_decoding/pgoutput to take such a lock.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 13 May 2021 15:51:13 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Tue, Apr 20, 2021 at 8:36 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Sat, Apr 17, 2021 at 1:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Fri, Apr 16, 2021 at 11:24 PM Andres Freund <andres@anarazel.de> wrote:> > This made me take a brief look at pgoutput.c - maybe I am missing\n> > > something, but how is the following not a memory leak?\n> > >\n> > > static void\n> > > maybe_send_schema(LogicalDecodingContext *ctx,\n> > > ReorderBufferTXN *txn, ReorderBufferChange *change,\n> > > Relation relation, RelationSyncEntry *relentry)\n> > > {\n> > > ...\n> > > /* Map must live as long as the session does. 
*/\n> > > oldctx = MemoryContextSwitchTo(CacheMemoryContext);\n> > > relentry->map = convert_tuples_by_name(CreateTupleDescCopy(indesc),\n> > > CreateTupleDescCopy(outdesc));\n> > > MemoryContextSwitchTo(oldctx);\n> > > send_relation_and_attrs(ancestor, xid, ctx);\n> > > RelationClose(ancestor);\n> > >\n> > > If - and that's common - convert_tuples_by_name() won't have to do\n> > > anything, the copied tuple descs will be permanently leaked.\n> > >\n> >\n> > I also think this is a permanent leak. I think we need to free all the\n> > memory associated with this map on the invalidation of this particular\n> > relsync entry (basically in rel_sync_cache_relation_cb).\n>\n> I agree there's a problem here.\n>\n> Back in:\n>\n> https://www.postgresql.org/message-id/CA%2BHiwqEeU19iQgjN6HF1HTPU0L5%2BJxyS5CmxaOVGNXBAfUY06Q%40mail.gmail.com\n>\n> I had proposed to move the map creation from maybe_send_schema() to\n> get_rel_sync_entry(), mainly because the latter is where I realized it\n> belongs, though a bit too late.\n>\n\nIt seems in get_rel_sync_entry, it will only build the map again when\nthere is any invalidation in publication_rel. Don't we need to build\nit after any DDL on the relation itself? I haven't tried this with a\ntest so I might be missing something. 
Also, don't we need to free the\nentire map as suggested by me?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 13 May 2021 16:13:27 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Thursday, May 13, 2021 7:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Tue, Apr 20, 2021 at 8:36 AM Amit Langote <amitlangote09@gmail.com>\r\n> wrote:\r\n> > Back in:\r\n> https://www.postgresql.org/message-id/CA%2BHiwqEeU19iQgjN6HF1HTP\r\n> U0L5%2\r\n> > BJxyS5CmxaOVGNXBAfUY06Q%40mail.gmail.com\r\n> >\r\n> > I had proposed to move the map creation from maybe_send_schema() to\r\n> > get_rel_sync_entry(), mainly because the latter is where I realized it\r\n> > belongs, though a bit too late.\r\n> \r\n> It seems in get_rel_sync_entry, it will only build the map again when there is\r\n> any invalidation in publication_rel. Don't we need to build it after any DDL on\r\n> the relation itself? 
I haven't tried this with a test so I might be missing\r\n> something.\r\nYeah, the patch not only tries to address the memory leak\r\nbut also changes the timing (condition) to call convert_tuples_by_name.\r\nThis is because the patch placed the function within a condition of !entry->replicate_valid in get_rel_sync_entry.\r\nOTOH, OSS HEAD calls it based on RelationSyncEntry's schema_sent in maybe_send_schema.\r\n\r\nThe two flags (replicate_valid and schema_sent) are reset at different timing somehow.\r\nInvalidateSystemCaches resets both flags but schema_send is also reset by LocalExecuteInvalidationMessage\r\nwhile replicate_valid is reset by CallSyscacheCallbacks.\r\n\r\nIIUC, InvalidateSystemCaches, which applies to both flags, is called\r\nwhen a transaction starts, via AtStart_Cache and when a table lock is taken via LockRelationOid, etc.\r\nAccordingly, I think we can notice changes after any DDL on the relation.\r\n\r\nBut, as for the different timing, we need to know the impact of the change accurately.\r\nLocalExecuteInvalidationMessage is called from functions in reorderbuffer\r\n(e.g. 
ReorderBufferImmediateInvalidation, ReorderBufferExecuteInvalidations).\r\nThis seems to me that changing the condition by the patch\r\nreduces the chance of the reorderbuffer's proactive reset of\r\nthe flag which leads to rebuild the map in the end.\r\n\r\nLangote-san, could you please explain this perspective ?\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Fri, 14 May 2021 02:19:53 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Thu, May 13, 2021 at 7:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Tue, Apr 20, 2021 at 8:36 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Sat, Apr 17, 2021 at 1:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > On Fri, Apr 16, 2021 at 11:24 PM Andres Freund <andres@anarazel.de> wrote:> > This made me take a brief look at pgoutput.c - maybe I am missing\n> > > > something, but how is the following not a memory leak?\n> > > >\n> > > > static void\n> > > > maybe_send_schema(LogicalDecodingContext *ctx,\n> > > > ReorderBufferTXN *txn, ReorderBufferChange *change,\n> > > > Relation relation, RelationSyncEntry *relentry)\n> > > > {\n> > > > ...\n> > > > /* Map must live as long as the session does. */\n> > > > oldctx = MemoryContextSwitchTo(CacheMemoryContext);\n> > > > relentry->map = convert_tuples_by_name(CreateTupleDescCopy(indesc),\n> > > > CreateTupleDescCopy(outdesc));\n> > > > MemoryContextSwitchTo(oldctx);\n> > > > send_relation_and_attrs(ancestor, xid, ctx);\n> > > > RelationClose(ancestor);\n> > > >\n> > > > If - and that's common - convert_tuples_by_name() won't have to do\n> > > > anything, the copied tuple descs will be permanently leaked.\n> > > >\n> > >\n> > > I also think this is a permanent leak. 
I think we need to free all the\n> > > memory associated with this map on the invalidation of this particular\n> > > relsync entry (basically in rel_sync_cache_relation_cb).\n> >\n> > I agree there's a problem here.\n> >\n> > Back in:\n> >\n> > https://www.postgresql.org/message-id/CA%2BHiwqEeU19iQgjN6HF1HTPU0L5%2BJxyS5CmxaOVGNXBAfUY06Q%40mail.gmail.com\n> >\n> > I had proposed to move the map creation from maybe_send_schema() to\n> > get_rel_sync_entry(), mainly because the latter is where I realized it\n> > belongs, though a bit too late.\n> >\n>\n> It seems in get_rel_sync_entry, it will only build the map again when\n> there is any invalidation in publication_rel. Don't we need to build\n> it after any DDL on the relation itself? I haven't tried this with a\n> test so I might be missing something.\n\nThat's a good point, I didn't really think that through. So,\nrel_sync_cache_relation_cb(), that gets called when the published\ntable's relcache is invalidated, only resets schema_sent but not\nreplicate_valid. The latter, as you said, is reset by\nrel_sync_cache_publication_cb() when a pg_publication syscache\ninvalidation occurs. So with the patch, it's possible for the map to\nnot be recreated, even when it should, if for example DDL changes the\ntable's TupleDesc.\n\nI have put the map-creation code back into maybe_send_schema() in the\nattached updated patch, updated some comments related to the map, and\nadded a test case that would fail with the previous patch (due to\nmoving map-creation code into get_rel_sync_entry() that is) but\nsucceeds with the updated patch.\n\n> Also, don't we need to free the\n> entire map as suggested by me?\n\nYes, I had missed that too. 
Addressed in the updated patch.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 14 May 2021 16:14:36 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "Takamichi-san,\n\nOn Fri, May 14, 2021 at 11:19 AM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n> On Thursday, May 13, 2021 7:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Tue, Apr 20, 2021 at 8:36 AM Amit Langote <amitlangote09@gmail.com>\n> > wrote:\n> > > Back in:\n> > https://www.postgresql.org/message-id/CA%2BHiwqEeU19iQgjN6HF1HTP\n> > U0L5%2\n> > > BJxyS5CmxaOVGNXBAfUY06Q%40mail.gmail.com\n> > >\n> > > I had proposed to move the map creation from maybe_send_schema() to\n> > > get_rel_sync_entry(), mainly because the latter is where I realized it\n> > > belongs, though a bit too late.\n> >\n> > It seems in get_rel_sync_entry, it will only build the map again when there is\n> > any invalidation in publication_rel. Don't we need to build it after any DDL on\n> > the relation itself? 
I haven't tried this with a test so I might be missing\n> > something.\n> Yeah, the patch not only tries to address the memory leak\n> but also changes the timing (condition) to call convert_tuples_by_name.\n> This is because the patch placed the function within a condition of !entry->replicate_valid in get_rel_sync_entry.\n> OTOH, OSS HEAD calls it based on RelationSyncEntry's schema_sent in maybe_send_schema.\n>\n> The two flags (replicate_valid and schema_sent) are reset at different timing somehow.\n> InvalidateSystemCaches resets both flags but schema_send is also reset by LocalExecuteInvalidationMessage\n> while replicate_valid is reset by CallSyscacheCallbacks.\n>\n> IIUC, InvalidateSystemCaches, which applies to both flags, is called\n> when a transaction starts, via AtStart_Cache and when a table lock is taken via LockRelationOid, etc.\n> Accordingly, I think we can notice changes after any DDL on the relation.\n>\n> But, as for the different timing, we need to know the impact of the change accurately.\n> LocalExecuteInvalidationMessage is called from functions in reorderbuffer\n> (e.g. ReorderBufferImmediateInvalidation, ReorderBufferExecuteInvalidations).\n> This seems to me that changing the condition by the patch\n> reduces the chance of the reorderbuffer's proactive reset of\n> the flag which leads to rebuild the map in the end.\n>\n> Langote-san, could you please explain this perspective ?\n\nPlease check the reply I just sent. 
In summary, moving map-creation\ninto get_rel_sync_entry() was not correct.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 14 May 2021 16:16:05 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Thursday, May 13, 2021 7:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Thu, May 13, 2021 at 11:15 AM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > I tried the following scenarios for trying to reproduce this.\r\n> > Scenario2:\r\n> > (1) set up 1 publisher and 1 subscriber\r\n> > (2) create table with user_catalog_table = true on the pub\r\n> > (3) insert some data to this table\r\n> > (4) create publication for the table on the pub\r\n> > (5) create table with user_catalog_table = true on the sub\r\n> > (6) create subscription on the sub\r\n> > (7) add synchronous_standby_names to publisher's configuration and\r\n> > restart the pub\r\n> > (8) have a session to truncate the user_catalog_table on the pub\r\n> >\r\n> > Scenario 2 was successful.\r\n> \r\n> Yeah, because pgoutput or for that matter even test_decoding doesn't\r\n> acquire a lock on user catalog tables.\r\n> \r\n> > Are these the scenario you have in mind, if not please let me know for\r\n> > the missing steps.\r\n> > I would like to reproduce the scenario and write a patch to fix this.\r\n> \r\n> I don't think we can reproduce it with core plugins as they don't lock user\r\n> catalog tables. \r\nOK. My current understanding about how the deadlock happens is below.\r\n\r\n1. TRUNCATE command is performed on user_catalog_table.\r\n2. TRUNCATE command locks the table and index with ACCESS EXCLUSIVE LOCK.\r\n3. TRUNCATE waits for the subscriber's synchronization\r\n\twhen synchronous_standby_names is set.\r\n4. 
Here, the walsender stops, *if* it tries to acquire a lock on the user_catalog_table\r\n\tbecause the table where it wants to see is locked by the TRUNCATE already. \r\n\r\nIf this is right, we need to go back to a little bit higher level discussion,\r\nsince whether we should hack any plugin to simulate the deadlock caused by user_catalog_table reference\r\nwith locking depends on the assumption if the plugin takes a lock on the user_catalog_table or not.\r\nIn other words, if the plugin does read only access to that type of table with no lock\r\n(by RelationIdGetRelation for example ?), the deadlock concern disappears and we don't\r\nneed to add anything to plugin sides, IIUC.\r\n\r\nHere, we haven't gotten any response about whether output plugin takes (should take)\r\nthe lock on the user_catalog_table. But, I would like to make a consensus\r\nabout this point before the implementation.\r\n\r\nBy the way, Amit-san already mentioned the main reason of this\r\nis that we allow decoding of TRUNCATE operation for user_catalog_table in synchronous mode.\r\nThe choices are provided by Amit-san already in the past email in [1].\r\n(1) disallow decoding of TRUNCATE operation for user_catalog_tables\r\n(2) disallow decoding of any operation for user_catalog_tables like system catalog tables\r\n\r\nYet, I'm not sure if either option solves the deadlock concern completely.\r\nIf application takes an ACCESS EXCLUSIVE lock by LOCK command (not by TRUNCATE !)\r\non the user_catalog_table in a transaction, and if the plugin tries to take a lock on it,\r\nI think the deadlock happens. Of course, having a consensus that the plugin takes no lock at all\r\nwould remove this concern, though. 
\r\n\r\nLike this, I'd like to discuss those two items in question together at first.\r\n* the plugin should take a lock on user_catalog_table or not\r\n* the range of decoding related to user_catalog_table\r\n\r\nTo me, taking no lock on the user_catalog_table from plugin is fine because\r\nwe have historical snapshot mechanism, which doesn't produce deadlock in any combination\r\neven when application executes a LOCK command for ACCESS EXCLUSIVE.\r\nIn addition, I agree with the idea that we don't decode any operation on user_catalog_table\r\nand have better alignment with usual system catalogs.\r\n\r\nThoughts ?\r\n\r\n[1] - https://www.postgresql.org/message-id/CAA4eK1LP8xTysPEMEBYAZ%3D6KoMWfjyf0gzF-9Bp%3DSgVFvYQDVw%40mail.gmail.com\r\n\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Fri, 14 May 2021 08:50:13 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Fri, May 14, 2021 at 12:44 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Thu, May 13, 2021 at 7:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > Also, don't we need to free the\n> > entire map as suggested by me?\n>\n> Yes, I had missed that too. Addressed in the updated patch.\n>\n\n+ relentry->map = convert_tuples_by_name(indesc, outdesc);\n+ if (relentry->map == NULL)\n+ {\n+ /* Map not necessary, so free the TupleDescs too. 
*/\n+ FreeTupleDesc(indesc);\n+ FreeTupleDesc(outdesc);\n+ }\n\nI think the patch frees these descriptors when the map is NULL but not\notherwise because free_conversion_map won't free these descriptors.\nBTW, have you tried this patch in back branches because I think we\nshould backpatch this fix?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 17 May 2021 14:43:19 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Fri, May 14, 2021 at 2:20 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Thursday, May 13, 2021 7:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > I don't think we can reproduce it with core plugins as they don't lock user\n> > catalog tables.\n> OK. My current understanding about how the deadlock happens is below.\n>\n> 1. TRUNCATE command is performed on user_catalog_table.\n> 2. TRUNCATE command locks the table and index with ACCESS EXCLUSIVE LOCK.\n> 3. TRUNCATE waits for the subscriber's synchronization\n> when synchronous_standby_names is set.\n> 4. 
Here, the walsender stops, *if* it tries to acquire a lock on the user_catalog_table\n> because the table where it wants to see is locked by the TRUNCATE already.\n>\n> If this is right,\n>\n\nYeah, the above steps are correct, so if we take a lock on\nuser_catalog_table when walsender is processing the WAL, it would lead\nto a problem.\n\n> we need to go back to a little bit higher level discussion,\n> since whether we should hack any plugin to simulate the deadlock caused by user_catalog_table reference\n> with locking depends on the assumption if the plugin takes a lock on the user_catalog_table or not.\n> In other words, if the plugin does read only access to that type of table with no lock\n> (by RelationIdGetRelation for example ?), the deadlock concern disappears and we don't\n> need to add anything to plugin sides, IIUC.\n>\n\nTrue, if the plugin doesn't acquire any lock on user_catalog_table,\nthen it is fine but we don't prohibit plugins to acquire locks on\nuser_catalog_tables. This is similar to system catalogs, the plugins\nand decoding code do acquire lock on those.\n\n> Here, we haven't gotten any response about whether output plugin takes (should take)\n> the lock on the user_catalog_table. 
But, I would like to make a consensus\n> about this point before the implementation.\n>\n> By the way, Amit-san already mentioned the main reason of this\n> is that we allow decoding of TRUNCATE operation for user_catalog_table in synchronous mode.\n> The choices are provided by Amit-san already in the past email in [1].\n> (1) disallow decoding of TRUNCATE operation for user_catalog_tables\n> (2) disallow decoding of any operation for user_catalog_tables like system catalog tables\n>\n> Yet, I'm not sure if either option solves the deadlock concern completely.\n> If application takes an ACCESS EXCLUSIVE lock by LOCK command (not by TRUNCATE !)\n> on the user_catalog_table in a transaction, and if the plugin tries to take a lock on it,\n> I think the deadlock happens. Of course, having a consensus that the plugin takes no lock at all\n> would remove this concern, though.\n>\n\nThis is true for system catalogs as well. See the similar report [1]\n\n> Like this, I'd like to discuss those two items in question together at first.\n> * the plugin should take a lock on user_catalog_table or not\n> * the range of decoding related to user_catalog_table\n>\n> To me, taking no lock on the user_catalog_table from plugin is fine\n>\n\nWe allow taking locks on system catalogs, so why prohibit\nuser_catalog_tables? 
However, I agree that if we want plugins to\nacquire the lock on user_catalog_tables then we should either prohibit\ndecoding of such relations or do something else to avoid deadlock\nhazards.\n\n[1] - https://www.postgresql.org/message-id/CALDaNm1UB%3D%3DgL9Poad4ETjfcyGdJBphWEzEZocodnBd--kJpVw%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 17 May 2021 15:14:33 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Mon, May 17, 2021 at 6:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Fri, May 14, 2021 at 12:44 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Thu, May 13, 2021 at 7:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > > Also, don't we need to free the\n> > > entire map as suggested by me?\n> >\n> > Yes, I had missed that too. Addressed in the updated patch.\n> >\n>\n> + relentry->map = convert_tuples_by_name(indesc, outdesc);\n> + if (relentry->map == NULL)\n> + {\n> + /* Map not necessary, so free the TupleDescs too. */\n> + FreeTupleDesc(indesc);\n> + FreeTupleDesc(outdesc);\n> + }\n>\n> I think the patch frees these descriptors when the map is NULL but not\n> otherwise because free_conversion_map won't free these descriptors.\n\nYou're right. 
I have fixed that by making the callback free the\nTupleDescs explicitly.\n\n> BTW, have you tried this patch in back branches because I think we\n> should backpatch this fix?\n\nI have created a version of the patch for v13, the only older release\nthat has this code, and can see that tests, including the newly added\none, pass.\n\nBoth patches are attached.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 17 May 2021 18:52:20 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Monday, May 17, 2021 6:52 PM Amit Langote <amitlangote09@gmail.com> wrote:\r\n> On Mon, May 17, 2021 at 6:13 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > On Fri, May 14, 2021 at 12:44 PM Amit Langote\r\n> <amitlangote09@gmail.com> wrote:\r\n> > > On Thu, May 13, 2021 at 7:43 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > >\r\n> > > > Also, don't we need to free the\r\n> > > > entire map as suggested by me?\r\n> > >\r\n> > > Yes, I had missed that too. Addressed in the updated patch.\r\n> > >\r\n> >\r\n> > + relentry->map = convert_tuples_by_name(indesc, outdesc);\r\n> > + if (relentry->map == NULL)\r\n> > + {\r\n> > + /* Map not necessary, so free the TupleDescs too. */\r\n> > + FreeTupleDesc(indesc); FreeTupleDesc(outdesc); }\r\n> >\r\n> > I think the patch frees these descriptors when the map is NULL but not\r\n> > otherwise because free_conversion_map won't free these descriptors.\r\n> \r\n> You're right. I have fixed that by making the callback free the TupleDescs\r\n> explicitly.\r\nThis fix looks correct. 
Also, the RT of v3 didn't fail.\r\n\r\nFurther, I've checked the newly added tests.\r\nAs you said, the test added in v2 fails with v1's contents but\r\npasses with v2 and v3, which proves that we adjusted the DDL in the right way.\r\n\r\n> > BTW, have you tried this patch in back branches because I think we\r\n> > should backpatch this fix?\r\n>\r\n> I have created a version of the patch for v13, the only older release that has\r\n> this code, and can see that tests, including the newly added one, pass.\r\n> \r\n> Both patches are attached.\r\nThe patch for PG13 can be applied to it cleanly and the RT succeeded.\r\n\r\nI have few really minor comments on your comments in the patch.\r\n\r\n(1) schema_sent's comment\r\n\r\n@@ -94,7 +94,8 @@ typedef struct RelationSyncEntry\r\n\r\n /*\r\n * Did we send the schema? If ancestor relid is set, its schema must also\r\n- * have been sent for this to be true.\r\n+ * have been sent and the map to convert the relation's tuples into the\r\n+ * ancestor's format created before this can be set to be true.\r\n */\r\n bool schema_sent;\r\n List *streamed_txns; /* streamed toplevel transactions with this\r\n\r\n\r\nI suggest to insert a comma between 'created' and 'before'\r\nbecause the sentence is a bit long and confusing.\r\n\r\nOr, I thought another comment idea for this part,\r\nbecause the original one doesn't care about the cycle of the reset.\r\n\r\n\"To avoid repetition to send the schema, this is set true after its first transmission.\r\nReset when any change of the relation definition is possible. 
If ancestor relid is set,\r\nits schema must have also been sent while the map to convert the relation's tuples into\r\nthe ancestor's format created, before this flag is set true.\"\r\n\r\n(2) comment in rel_sync_cache_relation_cb()\r\n\r\n@@ -1190,13 +1208,25 @@ rel_sync_cache_relation_cb(Datum arg, Oid relid)\r\n HASH_FIND, NULL);\r\n\r\n /*\r\n- * Reset schema sent status as the relation definition may have changed.\r\n+ * Reset schema sent status as the relation definition may have changed,\r\n+ * also freeing any objects that depended on the earlier definition.\r\n\r\nHow about divide this sentence into two sentences like\r\n\"Reset schema sent status as the relation definition may have changed.\r\nAlso, free any objects that depended on the earlier definition.\"\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Mon, 17 May 2021 12:45:16 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Mon, May 17, 2021 at 9:45 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n> On Monday, May 17, 2021 6:52 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > Both patches are attached.\n> The patch for PG13 can be applied to it cleanly and the RT succeeded.\n>\n> I have few really minor comments on your comments in the patch.\n>\n> (1) schema_sent's comment\n>\n> @@ -94,7 +94,8 @@ typedef struct RelationSyncEntry\n>\n> /*\n> * Did we send the schema? 
If ancestor relid is set, its schema must also\n> - * have been sent for this to be true.\n> + * have been sent and the map to convert the relation's tuples into the\n> + * ancestor's format created before this can be set to be true.\n> */\n> bool schema_sent;\n> List *streamed_txns; /* streamed toplevel transactions with this\n>\n>\n> I suggest to insert a comma between 'created' and 'before'\n> because the sentence is a bit long and confusing.\n>\n> Or, I thought another comment idea for this part,\n> because the original one doesn't care about the cycle of the reset.\n>\n> \"To avoid repetition to send the schema, this is set true after its first transmission.\n> Reset when any change of the relation definition is possible. If ancestor relid is set,\n> its schema must have also been sent while the map to convert the relation's tuples into\n> the ancestor's format created, before this flag is set true.\"\n>\n> (2) comment in rel_sync_cache_relation_cb()\n>\n> @@ -1190,13 +1208,25 @@ rel_sync_cache_relation_cb(Datum arg, Oid relid)\n> HASH_FIND, NULL);\n>\n> /*\n> - * Reset schema sent status as the relation definition may have changed.\n> + * Reset schema sent status as the relation definition may have changed,\n> + * also freeing any objects that depended on the earlier definition.\n>\n> How about divide this sentence into two sentences like\n> \"Reset schema sent status as the relation definition may have changed.\n> Also, free any objects that depended on the earlier definition.\"\n\nThanks for reading it over. I have revised comments in a way that\nhopefully addresses your concerns.\n\nWhile doing so, it occurred to me (maybe not for the first time) that\nwe are *unnecessarily* doing send_relation_and_attrs() for a relation\nif the changes will be published using an ancestor's schema. In that\ncase, sending only the ancestor's schema suffices AFAICS. Changing\nthe code that way doesn't break any tests. 
I propose that we fix that\ntoo.\n\nUpdated patches attached. I've added a commit message to both patches.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 18 May 2021 15:30:18 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Monday, May 17, 2021 6:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Fri, May 14, 2021 at 2:20 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > On Thursday, May 13, 2021 7:21 PM Amit Kapila\r\n> <amit.kapila16@gmail.com> wrote:\r\n> > > I don't think we can reproduce it with core plugins as they don't\r\n> > > lock user catalog tables.\r\n> > OK. My current understanding about how the deadlock happens is below.\r\n> >\r\n> > 1. TRUNCATE command is performed on user_catalog_table.\r\n> > 2. TRUNCATE command locks the table and index with ACCESS\r\n> EXCLUSIVE LOCK.\r\n> > 3. TRUNCATE waits for the subscriber's synchronization\r\n> > when synchronous_standby_names is set.\r\n> > 4. 
Here, the walsender stops, *if* it tries to acquire a lock on the\r\n> user_catalog_table\r\n> > because the table where it wants to see is locked by the\r\n> TRUNCATE already.\r\n> >\r\n> > If this is right,\r\n> >\r\n> \r\n> Yeah, the above steps are correct, so if we take a lock on user_catalog_table\r\n> when walsender is processing the WAL, it would lead to a problem.\r\n> \r\n> > we need to go back to a little bit higher level discussion, since\r\n> > whether we should hack any plugin to simulate the deadlock caused by\r\n> > user_catalog_table reference with locking depends on the assumption if\r\n> the plugin takes a lock on the user_catalog_table or not.\r\n> > In other words, if the plugin does read only access to that type of\r\n> > table with no lock (by RelationIdGetRelation for example ?), the\r\n> > deadlock concern disappears and we don't need to add anything to plugin\r\n> sides, IIUC.\r\n> >\r\n> \r\n> True, if the plugin doesn't acquire any lock on user_catalog_table, then it is\r\n> fine but we don't prohibit plugins to acquire locks on user_catalog_tables.\r\n> This is similar to system catalogs, the plugins and decoding code do acquire\r\n> lock on those.\r\nThanks for sharing this. I'll take the idea\r\nthat plugin can take a lock on user_catalog_table into account.\r\n\r\n\r\n> > Here, we haven't gotten any response about whether output plugin takes\r\n> > (should take) the lock on the user_catalog_table. 
But, I would like to\r\n> > make a consensus about this point before the implementation.\r\n> >\r\n> > By the way, Amit-san already mentioned the main reason of this is that\r\n> > we allow decoding of TRUNCATE operation for user_catalog_table in\r\n> synchronous mode.\r\n> > The choices are provided by Amit-san already in the past email in [1].\r\n> > (1) disallow decoding of TRUNCATE operation for user_catalog_tables\r\n> > (2) disallow decoding of any operation for user_catalog_tables like\r\n> > system catalog tables\r\n> >\r\n> > Yet, I'm not sure if either option solves the deadlock concern completely.\r\n> > If application takes an ACCESS EXCLUSIVE lock by LOCK command (not\r\n> by\r\n> > TRUNCATE !) on the user_catalog_table in a transaction, and if the\r\n> > plugin tries to take a lock on it, I think the deadlock happens. Of\r\n> > course, having a consensus that the plugin takes no lock at all would\r\n> remove this concern, though.\r\n> >\r\n> \r\n> This is true for system catalogs as well. See the similar report [1]\r\n> \r\n> > Like this, I'd like to discuss those two items in question together at first.\r\n> > * the plugin should take a lock on user_catalog_table or not\r\n> > * the range of decoding related to user_catalog_table\r\n> >\r\n> > To me, taking no lock on the user_catalog_table from plugin is fine\r\n> >\r\n> \r\n> We allow taking locks on system catalogs, so why prohibit\r\n> user_catalog_tables? However, I agree that if we want plugins to acquire the\r\n> lock on user_catalog_tables then we should either prohibit decoding of such\r\n> relations or do something else to avoid deadlock hazards.\r\nOK.\r\n\r\nAlthough we have not concluded the range of logical decoding of user_catalog_table\r\n(like we should exclude TRUNCATE command only or all operations on that type of table),\r\nI'm worried that disallowing the logical decoding of user_catalog_table produces\r\nthe deadlock still. 
It's because disabling it by itself does not affect the\r\nlock taken by TRUNCATE command. What I have in mind is an example below.\r\n\r\n(1) plugin (e.g. pgoutput) is designed to take a lock on user_catalog_table.\r\n(2) logical replication is set up in synchronous mode.\r\n(3) TRUNCATE command takes an access exclusive lock on the user_catalog_table.\r\n(4) This time, we don't do anything for the TRUNCATE decoding.\r\n(5) the plugin tries to take a lock on the truncated table\r\n\tbut, it can't due to the lock by TRUNCATE command.\r\n\r\nI was not sure that the place where the plugin takes the lock is in truncate_cb\r\nor somewhere else not directly related to decoding of the user_catalog_table itself,\r\nso I might be wrong. However, in this case,\r\nthe solution would be not disabling the decoding of user_catalog_table\r\nbut prohibiting TRUNCATE command on user_catalog_table in synchronous_mode.\r\nIf this is true, I need to extend an output plugin and simulate the deadlock first\r\nand remove it by fixing the TRUNCATE side. Thoughts ?\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Tue, 18 May 2021 07:59:36 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Tue, May 18, 2021 at 1:29 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Monday, May 17, 2021 6:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > We allow taking locks on system catalogs, so why prohibit\n> > user_catalog_tables? 
However, I agree that if we want plugins to acquire the\n> > lock on user_catalog_tables then we should either prohibit decoding of such\n> > relations or do something else to avoid deadlock hazards.\n> OK.\n>\n> Although we have not concluded the range of logical decoding of user_catalog_table\n> (like we should exclude TRUNCATE command only or all operations on that type of table),\n> I'm worried that disallowing the logical decoding of user_catalog_table produces\n> the deadlock still. It's because disabling it by itself does not affect the\n> lock taken by TRUNCATE command. What I have in mind is an example below.\n>\n> (1) plugin (e.g. pgoutput) is designed to take a lock on user_catalog_table.\n> (2) logical replication is set up in synchronous mode.\n> (3) TRUNCATE command takes an access exclusive lock on the user_catalog_table.\n> (4) This time, we don't do anything for the TRUNCATE decoding.\n> (5) the plugin tries to take a lock on the truncated table\n> but, it can't due to the lock by TRUNCATE command.\n>\n\nIf you skip decoding of truncate then we won't invoke plugin API so\nstep 5 will be skipped.\n\n> I was not sure that the place where the plugin takes the lock is in truncate_cb\n> or somewhere else not directly related to decoding of the user_catalog_table itself,\n> so I might be wrong. However, in this case,\n> the solution would be not disabling the decoding of user_catalog_table\n> but prohibiting TRUNCATE command on user_catalog_table in synchronous_mode.\n> If this is true, I need to extend an output plugin and simulate the deadlock first\n> and remove it by fixing the TRUNCATE side. Thoughts ?\n>\n\nI suggest not spending too much time reproducing this because it is\nquite clear that it will lead to deadlock if the plugin acquires lock\non user_catalog_table and we allow decoding of truncate. 
But if you\nwant to see how that happens you can try as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 18 May 2021 17:29:32 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Tue, May 18, 2021 at 5:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, May 18, 2021 at 1:29 PM osumi.takamichi@fujitsu.com\n> <osumi.takamichi@fujitsu.com> wrote:\n> >\n> > On Monday, May 17, 2021 6:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > We allow taking locks on system catalogs, so why prohibit\n> > > user_catalog_tables? However, I agree that if we want plugins to acquire the\n> > > lock on user_catalog_tables then we should either prohibit decoding of such\n> > > relations or do something else to avoid deadlock hazards.\n> > OK.\n> >\n> > Although we have not concluded the range of logical decoding of user_catalog_table\n> > (like we should exclude TRUNCATE command only or all operations on that type of table),\n> > I'm worried that disallowing the logical decoding of user_catalog_table produces\n> > the deadlock still. It's because disabling it by itself does not affect the\n> > lock taken by TRUNCATE command. What I have in mind is an example below.\n> >\n> > (1) plugin (e.g. pgoutput) is designed to take a lock on user_catalog_table.\n> > (2) logical replication is set up in synchronous mode.\n> > (3) TRUNCATE command takes an access exclusive lock on the user_catalog_table.\n> > (4) This time, we don't do anything for the TRUNCATE decoding.\n> > (5) the plugin tries to take a lock on the truncated table\n> > but, it can't due to the lock by TRUNCATE command.\n> >\n>\n> If you skip decoding of truncate then we won't invoke plugin API so\n> step 5 will be skipped.\n>\n\nI think you were right here even if skip step-4, the plugin might take\na lock on user_catalog_table for something else. 
I am not sure but I\nthink we should prohibit truncate on user_catalog_tables as we\nprohibit truncate on system catalog tables (see below [1]) if we want\nplugin to lock them, otherwise, as you said it might lead to deadlock.\nFor the matter, I think we should once check all other operations\nwhere we can take an exclusive lock on [user]_catalog_table, say\nCluster command, and compare the behavior of same on system catalog\ntables.\n\n[1]\npostgres=# truncate pg_class;\nERROR: permission denied: \"pg_class\" is a system catalog\npostgres=# cluster pg_class;\nERROR: there is no previously clustered index for table \"pg_class\"\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 19 May 2021 07:59:38 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Wed, May 19, 2021 at 7:59 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, May 18, 2021 at 5:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, May 18, 2021 at 1:29 PM osumi.takamichi@fujitsu.com\n> > <osumi.takamichi@fujitsu.com> wrote:\n> > >\n> > > On Monday, May 17, 2021 6:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > We allow taking locks on system catalogs, so why prohibit\n> > > > user_catalog_tables? However, I agree that if we want plugins to acquire the\n> > > > lock on user_catalog_tables then we should either prohibit decoding of such\n> > > > relations or do something else to avoid deadlock hazards.\n> > > OK.\n> > >\n> > > Although we have not concluded the range of logical decoding of user_catalog_table\n> > > (like we should exclude TRUNCATE command only or all operations on that type of table),\n> > > I'm worried that disallowing the logical decoding of user_catalog_table produces\n> > > the deadlock still. It's because disabling it by itself does not affect the\n> > > lock taken by TRUNCATE command. 
What I have in mind is an example below.\n> > >\n> > > (1) plugin (e.g. pgoutput) is designed to take a lock on user_catalog_table.\n> > > (2) logical replication is set up in synchronous mode.\n> > > (3) TRUNCATE command takes an access exclusive lock on the user_catalog_table.\n> > > (4) This time, we don't do anything for the TRUNCATE decoding.\n> > > (5) the plugin tries to take a lock on the truncated table\n> > > but, it can't due to the lock by TRUNCATE command.\n> > >\n> >\n> > If you skip decoding of truncate then we won't invoke plugin API so\n> > step 5 will be skipped.\n> >\n>\n> I think you were right here even if skip step-4, the plugin might take\n> a lock on user_catalog_table for something else. I am not sure but I\n> think we should prohibit truncate on user_catalog_tables as we\n> prohibit truncate on system catalog tables (see below [1]) if we want\n> plugin to lock them, otherwise, as you said it might lead to deadlock.\n> For the matter, I think we should once check all other operations\n> where we can take an exclusive lock on [user]_catalog_table, say\n> Cluster command, and compare the behavior of same on system catalog\n> tables.\n>\n> [1]\n> postgres=# truncate pg_class;\n> ERROR: permission denied: \"pg_class\" is a system catalog\n> postgres=# cluster pg_class;\n> ERROR: there is no previously clustered index for table \"pg_class\"\n>\n\nPlease ignore the cluster command as we need to use 'using index' with\nthat command to make it successful. 
I just want to show the truncate\ncommand behavior for which you have asked the question.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 19 May 2021 08:03:09 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Wednesday, May 19, 2021 11:33 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Wed, May 19, 2021 at 7:59 AM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Tue, May 18, 2021 at 5:29 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > >\r\n> > > On Tue, May 18, 2021 at 1:29 PM osumi.takamichi@fujitsu.com\r\n> > > <osumi.takamichi@fujitsu.com> wrote:\r\n> > > >\r\n> > > > On Monday, May 17, 2021 6:45 PM Amit Kapila\r\n> <amit.kapila16@gmail.com> wrote:\r\n> > > > >\r\n> > > > > We allow taking locks on system catalogs, so why prohibit\r\n> > > > > user_catalog_tables? However, I agree that if we want plugins to\r\n> > > > > acquire the lock on user_catalog_tables then we should either\r\n> > > > > prohibit decoding of such relations or do something else to avoid\r\n> deadlock hazards.\r\n> > > > OK.\r\n> > > >\r\n> > > > Although we have not concluded the range of logical decoding of\r\n> > > > user_catalog_table (like we should exclude TRUNCATE command only\r\n> > > > or all operations on that type of table), I'm worried that\r\n> > > > disallowing the logical decoding of user_catalog_table produces\r\n> > > > the deadlock still. It's because disabling it by itself does not affect the\r\n> lock taken by TRUNCATE command. What I have in mind is an example\r\n> below.\r\n> > > >\r\n> > > > (1) plugin (e.g. 
pgoutput) is designed to take a lock on\r\n> user_catalog_table.\r\n> > > > (2) logical replication is set up in synchronous mode.\r\n> > > > (3) TRUNCATE command takes an access exclusive lock on the\r\n> user_catalog_table.\r\n> > > > (4) This time, we don't do anything for the TRUNCATE decoding.\r\n> > > > (5) the plugin tries to take a lock on the truncated table\r\n> > > > but, it can't due to the lock by TRUNCATE command.\r\n> > > >\r\n> > >\r\n> > > If you skip decoding of truncate then we won't invoke plugin API so\r\n> > > step 5 will be skipped.\r\n> > >\r\n> >\r\n> > I think you were right here even if skip step-4, the plugin might take\r\n> > a lock on user_catalog_table for something else. \r\nYes, we can't know the exact place where the user wants to use the feature\r\nof user_catalog_table. Even if we imagine that the user skips\r\nthe truncate decoding (I imagined continuing and skipping a case in\r\nREORDER_BUFFER_CHANGE_TRUNCATE of pgoutput),\r\nit's possible that the user accesses it somewhere else for different purpose with lock.\r\n\r\n\r\n> I am not sure but I\r\n> > think we should prohibit truncate on user_catalog_tables as we\r\n> > prohibit truncate on system catalog tables (see below [1]) if we want\r\n> > plugin to lock them, otherwise, as you said it might lead to deadlock.\r\n> > For the matter, I think we should once check all other operations\r\n> > where we can take an exclusive lock on [user]_catalog_table, say\r\n> > Cluster command, and compare the behavior of same on system catalog\r\n> > tables.\r\n> >\r\n> > [1]\r\n> > postgres=# truncate pg_class;\r\n> > ERROR: permission denied: \"pg_class\" is a system catalog postgres=#\r\n> > cluster pg_class;\r\n> > ERROR: there is no previously clustered index for table \"pg_class\"\r\n> >\r\n> \r\n> Please ignore the cluster command as we need to use 'using index' with that\r\n> command to make it successful. 
I just want to show the truncate command\r\n> behavior for which you have asked the question.\r\nThank you so much for clarifying the direction.\r\nI agree with the changing the TRUNCATE side.\r\nI'll make a patch based on this.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Wed, 19 May 2021 02:58:01 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Wed, May 19, 2021 at 8:28 AM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Wednesday, May 19, 2021 11:33 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Wed, May 19, 2021 at 7:59 AM Amit Kapila <amit.kapila16@gmail.com>\n> > wrote:\n> > >\n> > > On Tue, May 18, 2021 at 5:29 PM Amit Kapila <amit.kapila16@gmail.com>\n> > wrote:\n> > > >\n> > > > On Tue, May 18, 2021 at 1:29 PM osumi.takamichi@fujitsu.com\n> > > > <osumi.takamichi@fujitsu.com> wrote:\n> > > > >\n> > > > > On Monday, May 17, 2021 6:45 PM Amit Kapila\n> > <amit.kapila16@gmail.com> wrote:\n> > > > > >\n> > > > > > We allow taking locks on system catalogs, so why prohibit\n> > > > > > user_catalog_tables? However, I agree that if we want plugins to\n> > > > > > acquire the lock on user_catalog_tables then we should either\n> > > > > > prohibit decoding of such relations or do something else to avoid\n> > deadlock hazards.\n> > > > > OK.\n> > > > >\n> > > > > Although we have not concluded the range of logical decoding of\n> > > > > user_catalog_table (like we should exclude TRUNCATE command only\n> > > > > or all operations on that type of table), I'm worried that\n> > > > > disallowing the logical decoding of user_catalog_table produces\n> > > > > the deadlock still. It's because disabling it by itself does not affect the\n> > lock taken by TRUNCATE command. What I have in mind is an example\n> > below.\n> > > > >\n> > > > > (1) plugin (e.g. 
pgoutput) is designed to take a lock on\n> > user_catalog_table.\n> > > > > (2) logical replication is set up in synchronous mode.\n> > > > > (3) TRUNCATE command takes an access exclusive lock on the\n> > user_catalog_table.\n> > > > > (4) This time, we don't do anything for the TRUNCATE decoding.\n> > > > > (5) the plugin tries to take a lock on the truncated table\n> > > > > but, it can't due to the lock by TRUNCATE command.\n> > > > >\n> > > >\n> > > > If you skip decoding of truncate then we won't invoke plugin API so\n> > > > step 5 will be skipped.\n> > > >\n> > >\n> > > I think you were right here even if skip step-4, the plugin might take\n> > > a lock on user_catalog_table for something else.\n> Yes, we can't know the exact place where the user wants to use the feature\n> of user_catalog_table. Even if we imagine that the user skips\n> the truncate decoding (I imagined continuing and skipping a case in\n> REORDER_BUFFER_CHANGE_TRUNCATE of pgoutput),\n> it's possible that the user accesses it somewhere else for different purpose with lock.\n>\n>\n> > I am not sure but I\n> > > think we should prohibit truncate on user_catalog_tables as we\n> > > prohibit truncate on system catalog tables (see below [1]) if we want\n> > > plugin to lock them, otherwise, as you said it might lead to deadlock.\n> > > For the matter, I think we should once check all other operations\n> > > where we can take an exclusive lock on [user]_catalog_table, say\n> > > Cluster command, and compare the behavior of same on system catalog\n> > > tables.\n> > >\n> > > [1]\n> > > postgres=# truncate pg_class;\n> > > ERROR: permission denied: \"pg_class\" is a system catalog postgres=#\n> > > cluster pg_class;\n> > > ERROR: there is no previously clustered index for table \"pg_class\"\n> > >\n> >\n> > Please ignore the cluster command as we need to use 'using index' with that\n> > command to make it successful. 
I just want to show the truncate command\n> > behavior for which you have asked the question.\n> Thank you so much for clarifying the direction.\n> I agree with the changing the TRUNCATE side.\n> I'll make a patch based on this.\n>\n\nIsn't it a better idea to start a new thread where you can summarize\nwhatever we have discussed here about user_catalog_tables? We might\nget the opinion from others about the behavior change you are\nproposing.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 19 May 2021 10:22:00 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Wednesday, May 19, 2021 1:52 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> > > I am not sure but I\r\n> > > > think we should prohibit truncate on user_catalog_tables as we\r\n> > > > prohibit truncate on system catalog tables (see below [1]) if we\r\n> > > > want plugin to lock them, otherwise, as you said it might lead to\r\n> deadlock.\r\n> > > > For the matter, I think we should once check all other operations\r\n> > > > where we can take an exclusive lock on [user]_catalog_table, say\r\n> > > > Cluster command, and compare the behavior of same on system\r\n> > > > catalog tables.\r\n> > > >\r\n> > > > [1]\r\n> > > > postgres=# truncate pg_class;\r\n> > > > ERROR: permission denied: \"pg_class\" is a system catalog\r\n> > > > postgres=# cluster pg_class;\r\n> > > > ERROR: there is no previously clustered index for table \"pg_class\"\r\n> > > >\r\n> > >\r\n> > > Please ignore the cluster command as we need to use 'using index'\r\n> > > with that command to make it successful. 
I just want to show the\r\n> > > truncate command behavior for which you have asked the question.\r\n> > Thank you so much for clarifying the direction.\r\n> > I agree with the changing the TRUNCATE side.\r\n> > I'll make a patch based on this.\r\n> >\r\n> \r\n> Isn't it a better idea to start a new thread where you can summarize whatever\r\n> we have discussed here about user_catalog_tables? We might get the opinion\r\n> from others about the behavior change you are proposing.\r\nYou are right. So, I've launched it with the patch for this.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Wed, 19 May 2021 10:35:30 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Tuesday, May 18, 2021 3:30 PM Amit Langote <amitlangote09@gmail.com> wrote:\r\n> While doing so, it occurred to me (maybe not for the first time) that we are\r\n> *unnecessarily* doing send_relation_and_attrs() for a relation if the changes\r\n> will be published using an ancestor's schema. In that case, sending only the\r\n> ancestor's schema suffices AFAICS. Changing the code that way doesn't\r\n> break any tests. I propose that we fix that too.\r\nI've analyzed this new change's validity.\r\nMy conclusion for this is that we don't have\r\nany bad impact from this, which means your additional fix is acceptable.\r\nI think this addition blurs the purpose of the patch a bit, though.\r\n\r\nWith the removal of the send_relation_and_attrs() of the patch,\r\nwe don't send one pair of LOGICAL_REP_MSG_TYPE('Y'),\r\nLOGICAL_REP_MSG_RELATION('R') message to the subscriber\r\nwhen we use ancestor. 
Therefore, with the patch, we no longer register or update the type and relation for\r\nmaybe_send_schema()'s argument 'relation' in the case where the ancestor's schema is used.\r\nHowever, both pgoutput_change() and pgoutput_truncate()\r\nhave conditions to check which OIDs to send to the subscriber for any operation.\r\nAccordingly, that pair of information for the argument 'relation'\r\nisn't used on the subscriber in that case, and we are fine.\r\n\r\nI'll comment on other minor things in another email.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Thu, 20 May 2021 08:58:59 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Tuesday, May 18, 2021 3:30 PM Amit Langote <amitlangote09@gmail.com> wrote:\r\n> On Mon, May 17, 2021 at 9:45 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> > On Monday, May 17, 2021 6:52 PM Amit Langote\r\n> <amitlangote09@gmail.com> wrote:\r\n> > > Both patches are attached.\r\n> > The patch for PG13 can be applied to it cleanly and the RT succeeded.\r\n> >\r\n> > I have few really minor comments on your comments in the patch.\r\n> >\r\n> > (1) schema_sent's comment\r\n> >\r\n> > @@ -94,7 +94,8 @@ typedef struct RelationSyncEntry\r\n> >\r\n> > /*\r\n> > * Did we send the schema? 
If ancestor relid is set, its schema\r\n> must also\r\n> > - * have been sent for this to be true.\r\n> > + * have been sent and the map to convert the relation's tuples into\r\n> the\r\n> > + * ancestor's format created before this can be set to be true.\r\n> > */\r\n> > bool schema_sent;\r\n> > List *streamed_txns; /* streamed toplevel\r\n> transactions with this\r\n> >\r\n> >\r\n> > I suggest to insert a comma between 'created' and 'before'\r\n> > because the sentence is a bit long and confusing.\r\n> >\r\n> > Or, I thought another comment idea for this part, because the original\r\n> > one doesn't care about the cycle of the reset.\r\n> >\r\n> > \"To avoid repetition to send the schema, this is set true after its first\r\n> transmission.\r\n> > Reset when any change of the relation definition is possible. If\r\n> > ancestor relid is set, its schema must have also been sent while the\r\n> > map to convert the relation's tuples into the ancestor's format created,\r\n> before this flag is set true.\"\r\n> >\r\n> > (2) comment in rel_sync_cache_relation_cb()\r\n> >\r\n> > @@ -1190,13 +1208,25 @@ rel_sync_cache_relation_cb(Datum arg, Oid\r\n> relid)\r\n> >\r\n> > HASH_FIND, NULL);\r\n> >\r\n> > /*\r\n> > - * Reset schema sent status as the relation definition may have\r\n> changed.\r\n> > + * Reset schema sent status as the relation definition may have\r\n> changed,\r\n> > + * also freeing any objects that depended on the earlier definition.\r\n> >\r\n> > How about divide this sentence into two sentences like \"Reset schema\r\n> > sent status as the relation definition may have changed.\r\n> > Also, free any objects that depended on the earlier definition.\"\r\n> \r\n> Thanks for reading it over. 
I have revised comments in a way that hopefully\r\n> addresses your concerns.\r\nThank you for your fix.\r\nI think the patches look good to me.\r\n\r\nJust in case, I'll report that the two patches succeeded\r\nin the RT as expected and from my side,\r\nthere's no more suggestions.\r\nThose are ready for committer, I think.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Thu, 20 May 2021 12:39:06 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Thu, May 20, 2021 at 5:59 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n> On Tuesday, May 18, 2021 3:30 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > While doing so, it occurred to me (maybe not for the first time) that we are\n> > *unnecessarily* doing send_relation_and_attrs() for a relation if the changes\n> > will be published using an ancestor's schema. In that case, sending only the\n> > ancestor's schema suffices AFAICS. Changing the code that way doesn't\n> > break any tests. I propose that we fix that too.\n> I've analyzed this new change's validity.\n> My conclusion for this is that we don't have\n> any bad impact from this, which means your additional fix is acceptable.\n> I think this addition blurs the purpose of the patch a bit, though.\n\nOkay, I've extracted that change into 0002.\n\n> With the removal of the send_relation_and_attrs() of the patch,\n> we don't send one pair of LOGICAL_REP_MSG_TYPE('Y'),\n> LOGICAL_REP_MSG_RELATION('R') message to the subscriber\n> when we use ancestor. 
Therefore, we become\n> not to register or update type and relation for maybe_send_schema()'s\n> argument 'relation' with the patch, in the case to use ancestor's schema.\n> However, both the pgoutput_change() and pgoutput_truncate()\n> have conditions to check oids to send to the subscriber for any operations.\n> Accordingly, the pair information for that argument 'relation'\n> aren't used on the subscriber in that case and we are fine.\n\nThanks for checking that.\n\nHere are updated/divided patches.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 20 May 2021 21:58:47 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Thursday, May 20, 2021 9:59 PM Amit Langote <amitlangote09@gmail.com> wrote:\r\n> Here are updated/divided patches.\r\nThanks for your updates.\r\n\r\nBut, I've detected segmentation faults caused by the patch,\r\nwhich can happen during 100_bugs.pl in src/test/subscription.\r\nThis happens more than one in ten times.\r\n\r\nThis problem would be a timing issue and has been introduced by v3 already.\r\nI used v5 for HEAD also and reproduced this failure, while\r\nOSS HEAD doesn't reproduce this, even when I executed 100_bugs.pl 200 times in a tight loop.\r\nI aligned the commit id 4f586fe2 for all check. 
Below logs are ones I got from v3.\r\n\r\n* The message of the failure during TAP test.\r\n\r\n# Postmaster PID for node \"twoways\" is 5015\r\nWaiting for replication conn testsub's replay_lsn to pass pg_current_wal_lsn() on twoways\r\n# poll_query_until timed out executing this query:\r\n# SELECT pg_current_wal_lsn() <= replay_lsn AND state = 'streaming' FROM pg_catalog.pg_stat_replication WHERE application_name = 'testsub';\r\n# expecting this output:\r\n# t\r\n# last actual query output:\r\n#\r\n# with stderr:\r\n# psql: error: connection to server on socket \"/tmp/cs8dhFOtZZ/.s.PGSQL.59345\" failed: No such file or directory\r\n# Is the server running locally and accepting connections on that socket?\r\ntimed out waiting for catchup at t/100_bugs.pl line 148.\r\n\r\n\r\nThe failure produces core file and its back trace is below.\r\nMy first guess of the cause is that between the timing to get an entry from hash_search() in get_rel_sync_entry()\r\nand to set the map by convert_tuples_by_name() in maybe_send_schema(), we had invalidation message,\r\nwhich tries to free unset descs in the entry ?\r\n\r\n* core file backtrace\r\n\r\nCore was generated by `postgres: twoways: walsender k5user [local] START_REPLICATION '.\r\nProgram terminated with signal 11, Segmentation fault.\r\n#0 0x00007f93b38b8c2b in rel_sync_cache_relation_cb (arg=0, relid=16388) at pgoutput.c:1225\r\n1225 FreeTupleDesc(entry->map->indesc);\r\nMissing separate debuginfos, use: debuginfo-install libgcc-4.8.5-44.el7.x86_64 libicu-50.2-4.el7_7.x86_64 libstdc++-4.8.5-44.el7.x86_64\r\n(gdb) bt\r\n#0 0x00007f93b38b8c2b in rel_sync_cache_relation_cb (arg=0, relid=16388) at pgoutput.c:1225\r\n#1 0x0000000000ae21f0 in LocalExecuteInvalidationMessage (msg=0x21d1de8) at inval.c:601\r\n#2 0x00000000008dbd6e in ReorderBufferExecuteInvalidations (nmsgs=4, msgs=0x21d1db8) at reorderbuffer.c:3232\r\n#3 0x00000000008da70a in ReorderBufferProcessTXN (rb=0x21d1a40, txn=0x2210b58, commit_lsn=25569096, 
snapshot_now=0x21d1e10, command_id=1, streaming=false)\r\n at reorderbuffer.c:2294\r\n#4 0x00000000008dae56 in ReorderBufferReplay (txn=0x2210b58, rb=0x21d1a40, xid=748, commit_lsn=25569096, end_lsn=25569216, commit_time=674891531661619,\r\n origin_id=0, origin_lsn=0) at reorderbuffer.c:2591\r\n#5 0x00000000008daede in ReorderBufferCommit (rb=0x21d1a40, xid=748, commit_lsn=25569096, end_lsn=25569216, commit_time=674891531661619, origin_id=0,\r\n origin_lsn=0) at reorderbuffer.c:2615\r\n#6 0x00000000008cae06 in DecodeCommit (ctx=0x21e6880, buf=0x7fffb9383db0, parsed=0x7fffb9383c10, xid=748, two_phase=false) at decode.c:744\r\n#7 0x00000000008ca1ed in DecodeXactOp (ctx=0x21e6880, buf=0x7fffb9383db0) at decode.c:278\r\n#8 0x00000000008c9e76 in LogicalDecodingProcessRecord (ctx=0x21e6880, record=0x21e6c80) at decode.c:142\r\n#9 0x0000000000901fcc in XLogSendLogical () at walsender.c:2876\r\n#10 0x0000000000901327 in WalSndLoop (send_data=0x901f30 <XLogSendLogical>) at walsender.c:2306\r\n#11 0x00000000008ffd5f in StartLogicalReplication (cmd=0x219aff8) at walsender.c:1206\r\n#12 0x00000000009006ae in exec_replication_command (\r\n cmd_string=0x2123c20 \"START_REPLICATION SLOT \\\"pg_16400_sync_16392_6964617299612181363\\\" LOGICAL 0/182D058 (proto_version '2', publication_names '\\\"testpub\\\"')\") at walsender.c:1646\r\n#13 0x000000000096ffae in PostgresMain (argc=1, argv=0x7fffb93840d0, dbname=0x214ef18 \"d1\", username=0x214eef8 \"k5user\") at postgres.c:4482\r\n\r\nI'll update when I get more information.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Fri, 21 May 2021 06:55:10 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Friday, May 21, 2021 3:55 PM I wrote:\r\n> On Thursday, May 20, 2021 9:59 PM Amit Langote\r\n> <amitlangote09@gmail.com> wrote:\r\n> > Here are updated/divided 
patches.\r\n> Thanks for your updates.\r\n> \r\n> But, I've detected segmentation faults caused by the patch, which can\r\n> happen during 100_bugs.pl in src/test/subscription.\r\n> This happens more than one in ten times.\r\n> \r\n> This problem would be a timing issue and has been introduced by v3 already.\r\n> I used v5 for HEAD also and reproduced this failure, while OSS HEAD doesn't\r\n> reproduce this, even when I executed 100_bugs.pl 200 times in a tight loop.\r\n> I aligned the commit id 4f586fe2 for all check. Below logs are ones I got from v3.\r\n> \r\n> * The message of the failure during TAP test.\r\n> \r\n> # Postmaster PID for node \"twoways\" is 5015 Waiting for replication conn\r\n> testsub's replay_lsn to pass pg_current_wal_lsn() on twoways #\r\n> poll_query_until timed out executing this query:\r\n> # SELECT pg_current_wal_lsn() <= replay_lsn AND state = 'streaming'\r\n> FROM pg_catalog.pg_stat_replication WHERE application_name = 'testsub';\r\n> # expecting this output:\r\n> # t\r\n> # last actual query output:\r\n> #\r\n> # with stderr:\r\n> # psql: error: connection to server on socket\r\n> \"/tmp/cs8dhFOtZZ/.s.PGSQL.59345\" failed: No such file or directory\r\n> # Is the server running locally and accepting connections on that\r\n> socket?\r\n> timed out waiting for catchup at t/100_bugs.pl line 148.\r\n> \r\n> \r\n> The failure produces core file and its back trace is below.\r\n> My first guess of the cause is that between the timing to get an entry from\r\n> hash_search() in get_rel_sync_entry() and to set the map by\r\n> convert_tuples_by_name() in maybe_send_schema(), we had invalidation\r\n> message, which tries to free unset descs in the entry ?\r\nSorry, this guess was not accurate at all.\r\nPlease ignore this because we need to have the entry->map set\r\nto free descs. 
Sorry for the noise.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Fri, 21 May 2021 07:26:32 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Fri, May 21, 2021 at 3:55 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n> On Thursday, May 20, 2021 9:59 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > Here are updated/divided patches.\n> Thanks for your updates.\n>\n> But, I've detected segmentation faults caused by the patch,\n> which can happen during 100_bugs.pl in src/test/subscription.\n> This happens more than one in ten times.\n>\n> This problem would be a timing issue and has been introduced by v3 already.\n> I used v5 for HEAD also and reproduced this failure, while\n> OSS HEAD doesn't reproduce this, even when I executed 100_bugs.pl 200 times in a tight loop.\n> I aligned the commit id 4f586fe2 for all check. Below logs are ones I got from v3.\n>\n> My first guess of the cause is that between the timing to get an entry from hash_search() in get_rel_sync_entry()\n> and to set the map by convert_tuples_by_name() in maybe_send_schema(), we had invalidation message,\n> which tries to free unset descs in the entry ?\n\nHmm, maybe get_rel_sync_entry() should explicitly set map to NULL when\nfirst initializing an entry. It's possible that without doing so, the\nmap remains set to a garbage value, which causes the invalidation\ncallback that runs into such a partially initialized entry to segfault\nupon trying to dereference that garbage pointer.\n\nI've tried that in the attached v6 patches. 
Please check.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 21 May 2021 16:42:42 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Friday, May 21, 2021 4:43 PM Amit Langote <amitlangote09@gmail.com> wrote:\r\n> On Fri, May 21, 2021 at 3:55 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> > But, I've detected segmentation faults caused by the patch, which can\r\n> > happen during 100_bugs.pl in src/test/subscription.\r\n> > This happens more than one in ten times.\r\n> >\r\n> > This problem would be a timing issue and has been introduced by v3\r\n> already.\r\n> > I used v5 for HEAD also and reproduced this failure, while OSS HEAD\r\n> > doesn't reproduce this, even when I executed 100_bugs.pl 200 times in a\r\n> tight loop.\r\n> > I aligned the commit id 4f586fe2 for all check. Below logs are ones I got from\r\n> v3.\r\n> >\r\n> > My first guess of the cause is that between the timing to get an entry\r\n> > from hash_search() in get_rel_sync_entry() and to set the map by\r\n> > convert_tuples_by_name() in maybe_send_schema(), we had invalidation\r\n> message, which tries to free unset descs in the entry ?\r\n> \r\n> Hmm, maybe get_rel_syn_entry() should explicitly set map to NULL when first\r\n> initializing an entry. 
It's possible that without doing so, the map remains set\r\n> to a garbage value, which causes the invalidation callback that runs into such\r\n> partially initialized entry to segfault upon trying to deference that garbage\r\n> pointer.\r\nJust in case, I prepared a new PG and\r\ndid a check to make get_rel_sync_entry() print its first pointer value with elog.\r\nHere, when I executed 100_bugs.pl, I got some garbage below.\r\n\r\n* The change I did:\r\n@@ -1011,6 +1011,7 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)\r\n entry->pubactions.pubinsert = entry->pubactions.pubupdate =\r\n entry->pubactions.pubdelete = entry->pubactions.pubtruncate = false;\r\n entry->publish_as_relid = InvalidOid;\r\n+ elog(LOG, \"**> the pointer's default value : %p\", entry->map);\r\n }\r\n\r\n* Grep result of all logs from 100_bugs.pl\r\n2021-05-21 09:05:56.132 UTC [29122] sub1 LOG: **> the pointer's default value : (nil)\r\n2021-05-21 09:06:11.556 UTC [30198] testsub1 LOG: **> the pointer's default value : (nil)\r\n2021-05-21 09:06:11.561 UTC [30200] pg_16389_sync_16384_6964667281140237667 LOG: **> the pointer's default value : 0x7f7f7f7f7f7f7f7f\r\n2021-05-21 09:06:11.567 UTC [30191] testsub2 LOG: **> the pointer's default value : (nil)\r\n2021-05-21 09:06:11.570 UTC [30194] pg_16387_sync_16384_6964667292923737489 LOG: **> the pointer's default value : 0x7f7f7f7f7f7f7f7f\r\n2021-05-21 09:06:02.513 UTC [29809] testsub LOG: **> the pointer's default value : (nil)\r\n2021-05-21 09:06:02.557 UTC [29809] testsub LOG: **> the pointer's default value : (nil)\r\n\r\nSo, your solution is right, I think.\r\n\r\n> I've tried that in the attached v6 patches. 
Please check.\r\nWith this fix, I don't get the failure.\r\nI executed 100_bugs.pl 100 times in a loop and didn't face that problem.\r\n\r\nAgain, I conducted one make check-world for each combination\r\n* use OSS HEAD or PG13\r\n* apply only the first patch or both patches\r\nThose all passed.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Fri, 21 May 2021 12:44:31 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Friday, May 21, 2021 9:45 PM I wrote:\r\n> On Friday, May 21, 2021 4:43 PM Amit Langote <amitlangote09@gmail.com>\r\n> wrote:\r\n> > On Fri, May 21, 2021 at 3:55 PM osumi.takamichi@fujitsu.com\r\n> > <osumi.takamichi@fujitsu.com> wrote:\r\n> > > But, I've detected segmentation faults caused by the patch, which\r\n> > > can happen during 100_bugs.pl in src/test/subscription.\r\n> >\r\n> > Hmm, maybe get_rel_sync_entry() should explicitly set map to NULL when\r\n> > first initializing an entry. 
It's possible that without doing so, the\r\n> > map remains set to a garbage value, which causes the invalidation\r\n> > callback that runs into such partially initialized entry to segfault\r\n> > upon trying to deference that garbage pointer.\r\n> Just in case, I prepared a new PG and\r\n> did a check to make get_rel_sync_entry() print its first pointer value with elog.\r\n> Here, when I executed 100_bugs.pl, I got some garbage below.\r\n> \r\n> * The change I did:\r\n> @@ -1011,6 +1011,7 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)\r\n> entry->pubactions.pubinsert =\r\n> entry->pubactions.pubupdate =\r\n> entry->pubactions.pubdelete =\r\n> entry->pubactions.pubtruncate = false;\r\n> entry->publish_as_relid = InvalidOid;\r\n> + elog(LOG, \"**> the pointer's default value : %p\",\r\n> + entry->map);\r\n> }\r\n>\r\n(snip)\r\n> \r\n> So, your solution is right, I think.\r\nThis was a bit indirect.\r\nI've checked the core file of v3's failure core and printed the entry\r\nto get more confidence. Sorry for inappropriate measure to verify the solution.\r\n\r\n$1 = {relid = 16388, schema_sent = false, streamed_txns = 0x0, replicate_valid = false, pubactions = {pubinsert = false, pubupdate = false, pubdelete = false, pubtruncate = false}, publish_as_relid = 16388,\r\n map = 0x7f7f7f7f7f7f7f7f}\r\n\r\nYes, the process tried to free garbage.\r\nNow, we are convinced that we have addressed the problem. 
That's it !\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Sat, 22 May 2021 02:00:52 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Sat, May 22, 2021 at 11:00 AM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n> On Friday, May 21, 2021 9:45 PM I worte:\n> > On Friday, May 21, 2021 4:43 PM Amit Langote <amitlangote09@gmail.com>\n> > wrote:\n> > > On Fri, May 21, 2021 at 3:55 PM osumi.takamichi@fujitsu.com\n> > > <osumi.takamichi@fujitsu.com> wrote:\n> > > > But, I've detected segmentation faults caused by the patch, which\n> > > > can happen during 100_bugs.pl in src/test/subscription.\n> > >\n> > > Hmm, maybe get_rel_syn_entry() should explicitly set map to NULL when\n> > > first initializing an entry. It's possible that without doing so, the\n> > > map remains set to a garbage value, which causes the invalidation\n> > > callback that runs into such partially initialized entry to segfault\n> > > upon trying to deference that garbage pointer.\n> > Just in case, I prepared a new PG and\n> > did a check to make get_rel_sync_entry() print its first pointer value with elog.\n> > Here, when I executed 100_bugs.pl, I got some garbage below.\n> >\n> > * The change I did:\n> > @@ -1011,6 +1011,7 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)\n> > entry->pubactions.pubinsert =\n> > entry->pubactions.pubupdate =\n> > entry->pubactions.pubdelete =\n> > entry->pubactions.pubtruncate = false;\n> > entry->publish_as_relid = InvalidOid;\n> > + elog(LOG, \"**> the pointer's default value : %p\",\n> > + entry->map);\n> > }\n> >\n> (snip)\n> >\n> > So, your solution is right, I think.\n> This was a bit indirect.\n> I've checked the core file of v3's failure core and printed the entry\n> to get more confidence. 
Sorry for inappropriate measure to verify the solution.\n>\n> $1 = {relid = 16388, schema_sent = false, streamed_txns = 0x0, replicate_valid = false, pubactions = {pubinsert = false, pubupdate = false, pubdelete = false, pubtruncate = false}, publish_as_relid = 16388,\n> map = 0x7f7f7f7f7f7f7f7f}\n>\n> Yes, the process tried to free garbage.\n> Now, we are convinced that we have addressed the problem. That's it !\n\nThanks for confirming that.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 22 May 2021 11:57:37 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Saturday, May 22, 2021 11:58 AM Amit Langote <amitlangote09@gmail.com> wrote:\r\n> On Sat, May 22, 2021 at 11:00 AM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> > I've checked the core file of v3's failure core and printed the entry\r\n> > to get more confidence. Sorry for inappropriate measure to verify the\r\n> solution.\r\n> >\r\n> > $1 = {relid = 16388, schema_sent = false, streamed_txns = 0x0,\r\n> replicate_valid = false, pubactions = {pubinsert = false, pubupdate = false,\r\n> pubdelete = false, pubtruncate = false}, publish_as_relid = 16388,\r\n> > map = 0x7f7f7f7f7f7f7f7f}\r\n> >\r\n> > Yes, the process tried to free garbage.\r\n> > Now, we are convinced that we have addressed the problem. That's it !\r\n> \r\n> Thanks for confirming that.\r\nLangote-san, I need to report another issue.\r\n\r\nWhen I execute make check-world with v6 additionally,\r\nI've gotten another failure. 
I get this about once in\r\n20 times of make check-world with v6.\r\n\r\nThe test ended with stderr outputs below.\r\n\r\nNOTICE: database \"regression\" does not exist, skipping\r\nmake[2]: *** [check] Error 1\r\nmake[1]: *** [check-isolation-recurse] Error 2\r\nmake[1]: *** Waiting for unfinished jobs....\r\nmake: *** [check-world-src/test-recurse] Error 2\r\nmake: *** Waiting for unfinished jobs....\r\n\r\nAnd, I had ./src/test/isolation/output_iso/regression.diffs and regression.out,\r\nwhich told me below.\r\n\r\ntest detach-partition-concurrently-1 ... ok 705 ms\r\ntest detach-partition-concurrently-2 ... ok 260 ms\r\ntest detach-partition-concurrently-3 ... FAILED 618 ms\r\ntest detach-partition-concurrently-4 ... ok 1384 ms\r\n\r\nThe diffs file showed me below.\r\n\r\ndiff -U3 /home/k5user/new_disk/repro_fail_v6/src/test/isolation/expected/detach-partition-concurrently-3.out /home/k5user/new_disk/repro_fail_v6/src/test/isolation/output_iso/results/detach-partition-concurrently-3.out\r\n--- /home/k5user/new_disk/repro_fail_v6/src/test/isolation/expected/detach-partition-concurrently-3.out 2021-05-24 01:22:22.381488295 +0000\r\n+++ /home/k5user/new_disk/repro_fail_v6/src/test/isolation/output_iso/results/detach-partition-concurrently-3.out 2021-05-24 02:47:08.292488295 +0000\r\n@@ -190,7 +190,7 @@\r\n\r\n t\r\n step s2detach: <... 
completed>\r\n-error in steps s1cancel s2detach: ERROR: canceling statement due to user request\r\n+ERROR: canceling statement due to user request\r\n step s2detach2: ALTER TABLE d3_listp DETACH PARTITION d3_listp2 CONCURRENTLY;\r\n ERROR: partition \"d3_listp1\" already pending detach in partitioned table \"public.d3_listp\"\r\n step s1c: COMMIT;\r\n\r\nI'm not sure if this is related to the patch or we already have this from OSS HEAD yet.\r\n\r\nFYI: the steps I did are \r\n1 - clone PG(I used f5024d8)\r\n2 - git am the 2 patches for HEAD\r\n\t* HEAD-v6-0001-pgoutput-fix-memory-management-of-RelationSyncEnt.patch\r\n\t* HEAD-v6-0002-pgoutput-don-t-send-leaf-partition-schema-when-pu.patch\r\n3 - configure with --enable-cassert --enable-debug --enable-tap-tests --with-icu CFLAGS=-O0 --prefix=/where/you/wanna/put/PG\r\n4 - make -j2 2> make.log # did not get stderr output.\r\n5 - make check-world -j8 2> make_check_world.log\r\n\t(after this I've conducted another tight loop test by repeating make check-world and got the error)\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Mon, 24 May 2021 03:15:57 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Mon, May 24, 2021 at 12:16 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n> On Saturday, May 22, 2021 11:58 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Sat, May 22, 2021 at 11:00 AM osumi.takamichi@fujitsu.com\n> > <osumi.takamichi@fujitsu.com> wrote:\n> > > I've checked the core file of v3's failure core and printed the entry\n> > > to get more confidence. 
Sorry for inappropriate measure to verify the\n> > solution.\n> > >\n> > > $1 = {relid = 16388, schema_sent = false, streamed_txns = 0x0,\n> > replicate_valid = false, pubactions = {pubinsert = false, pubupdate = false,\n> > pubdelete = false, pubtruncate = false}, publish_as_relid = 16388,\n> > > map = 0x7f7f7f7f7f7f7f7f}\n> > >\n> > > Yes, the process tried to free garbage.\n> > > Now, we are convinced that we have addressed the problem. That's it !\n> >\n> > Thanks for confirming that.\n> Langote-san, I need to report another issue.\n\nThanks for continued testing.\n\n> When I execute make check-world with v6 additionally,\n> I've gotten another failure. I get this about once in\n> 20 times of make check-world with v6.\n>\n> The test ended with stderr outputs below.\n>\n> NOTICE: database \"regression\" does not exist, skipping\n> make[2]: *** [check] Error 1\n> make[1]: *** [check-isolation-recurse] Error 2\n> make[1]: *** Waiting for unfinished jobs....\n> make: *** [check-world-src/test-recurse] Error 2\n> make: *** Waiting for unfinished jobs....\n>\n> And, I had ./src/test/isolation/output_iso/regression.diffs and regression.out,\n> which told me below.\n>\n> test detach-partition-concurrently-1 ... ok 705 ms\n> test detach-partition-concurrently-2 ... ok 260 ms\n> test detach-partition-concurrently-3 ... FAILED 618 ms\n> test detach-partition-concurrently-4 ... 
ok 1384 ms\n>\n> The diffs file showed me below.\n>\n> diff -U3 /home/k5user/new_disk/repro_fail_v6/src/test/isolation/expected/detach-partition-concurrently-3.out /home/k5user/new_disk/repro_fail_v6/src/test/isolation/output_iso/results/detach-partition-concurrently-3.out\n> --- /home/k5user/new_disk/repro_fail_v6/src/test/isolation/expected/detach-partition-concurrently-3.out 2021-05-24 01:22:22.381488295 +0000\n> +++ /home/k5user/new_disk/repro_fail_v6/src/test/isolation/output_iso/results/detach-partition-concurrently-3.out 2021-05-24 02:47:08.292488295 +0000\n> @@ -190,7 +190,7 @@\n>\n> t\n> step s2detach: <... completed>\n> -error in steps s1cancel s2detach: ERROR: canceling statement due to user request\n> +ERROR: canceling statement due to user request\n> step s2detach2: ALTER TABLE d3_listp DETACH PARTITION d3_listp2 CONCURRENTLY;\n> ERROR: partition \"d3_listp1\" already pending detach in partitioned table \"public.d3_listp\"\n> step s1c: COMMIT;\n>\n> I'm not sure if this is related to the patch or we already have this from OSS HEAD yet.\n\nHmm, I doubt it would be this patch's fault. Maybe we still have some\nunresolved issues with DETACH PARTITION CONCURRENTLY. I suggest you\nreport this in a new thread preferably after you figure that it's\nreproducible.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 24 May 2021 12:22:35 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Monday, May 24, 2021 12:23 PM Amit Langote <amitlangote09@gmail.com> wrote:\r\n> On Mon, May 24, 2021 at 12:16 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> > When I execute make check-world with v6 additionally, I've gotten\r\n> > another failure. I get this about once in\r\n> > 20 times of make check-world with v6.\r\n> Hmm, I doubt it would be this patch's fault. 
Maybe we still have some\r\n> unresolved issues with DETACH PARTITION CONCURRENTLY. I suggest\r\n> you report this in a new thread preferably after you figure that it's\r\n> reproducible.\r\nOK, I'll do so when I get this with OSS HEAD.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Mon, 24 May 2021 03:57:26 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Monday, May 24, 2021 12:57 PM I wrote:\r\n> On Monday, May 24, 2021 12:23 PM Amit Langote <amitlangote09@gmail.com>\r\n> wrote:\r\n> > On Mon, May 24, 2021 at 12:16 PM osumi.takamichi@fujitsu.com\r\n> > <osumi.takamichi@fujitsu.com> wrote:\r\n> > > When I execute make check-world with v6 additionally, I've gotten\r\n> > > another failure. I get this about once in\r\n> > > 20 times of make check-world with v6.\r\n> > Hmm, I doubt it would be this patch's fault. Maybe we still have some\r\n> > unresolved issues with DETACH PARTITION CONCURRENTLY. I suggest\r\n> you\r\n> > report this in a new thread preferably after you figure that it's\r\n> > reproducible.\r\n> OK, I'll do so when I get this with OSS HEAD.\r\nJust now, I've reported this on hackers as a different thread.\r\nThis was not an issue of the patch.\r\n\r\nAlso, I have no more suggestions to fix the patch set you shared.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Mon, 24 May 2021 06:42:53 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Fri, May 21, 2021 at 1:12 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> Hmm, maybe get_rel_syn_entry() should explicitly set map to NULL when\n> first initializing an entry. 
It's possible that without doing so, the\n> map remains set to a garbage value, which causes the invalidation\n> callback that runs into such partially initialized entry to segfault\n> upon trying to deference that garbage pointer.\n>\n> I've tried that in the attached v6 patches. Please check.\n>\n\nv6-0001\n=========\n+ send_relation_and_attrs(ancestor, xid, ctx);\n+\n /* Map must live as long as the session does. */\n oldctx = MemoryContextSwitchTo(CacheMemoryContext);\n- relentry->map = convert_tuples_by_name(CreateTupleDescCopy(indesc),\n- CreateTupleDescCopy(outdesc));\n+\n+ /*\n+ * Make copies of the TupleDescs that will live as long as the map\n+ * does before putting into the map.\n+ */\n+ indesc = CreateTupleDescCopy(indesc);\n+ outdesc = CreateTupleDescCopy(outdesc);\n+ relentry->map = convert_tuples_by_name(indesc, outdesc);\n+ if (relentry->map == NULL)\n+ {\n+ /* Map not necessary, so free the TupleDescs too. */\n+ FreeTupleDesc(indesc);\n+ FreeTupleDesc(outdesc);\n+ }\n+\n MemoryContextSwitchTo(oldctx);\n- send_relation_and_attrs(ancestor, xid, ctx);\n\nWhy do we need to move send_relation_and_attrs() call? I think it\ndoesn't matter much either way but OTOH, in the existing code, if\nthere is an error (say 'out of memory' or some other) while building\nthe map, we won't send relation attrs whereas with your change we will\nunnecessarily send those in such a case.\n\nI feel there is no need to backpatch v6-0002. We can just make it a\nHEAD-only change as that doesn't cause any bug even though it is\nbetter not to send it. If we consider it as a HEAD-only change then\nprobably we can register it for PG-15. 
What do you think?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 27 May 2021 12:05:57 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Thu, May 27, 2021 at 3:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Fri, May 21, 2021 at 1:12 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> >\n> > Hmm, maybe get_rel_syn_entry() should explicitly set map to NULL when\n> > first initializing an entry. It's possible that without doing so, the\n> > map remains set to a garbage value, which causes the invalidation\n> > callback that runs into such partially initialized entry to segfault\n> > upon trying to deference that garbage pointer.\n> >\n> > I've tried that in the attached v6 patches. Please check.\n> >\n>\n> v6-0001\n> =========\n> + send_relation_and_attrs(ancestor, xid, ctx);\n> +\n> /* Map must live as long as the session does. */\n> oldctx = MemoryContextSwitchTo(CacheMemoryContext);\n> - relentry->map = convert_tuples_by_name(CreateTupleDescCopy(indesc),\n> - CreateTupleDescCopy(outdesc));\n> +\n> + /*\n> + * Make copies of the TupleDescs that will live as long as the map\n> + * does before putting into the map.\n> + */\n> + indesc = CreateTupleDescCopy(indesc);\n> + outdesc = CreateTupleDescCopy(outdesc);\n> + relentry->map = convert_tuples_by_name(indesc, outdesc);\n> + if (relentry->map == NULL)\n> + {\n> + /* Map not necessary, so free the TupleDescs too. */\n> + FreeTupleDesc(indesc);\n> + FreeTupleDesc(outdesc);\n> + }\n> +\n> MemoryContextSwitchTo(oldctx);\n> - send_relation_and_attrs(ancestor, xid, ctx);\n>\n> Why do we need to move send_relation_and_attrs() call? 
I think it\n> doesn't matter much either way but OTOH, in the existing code, if\n> there is an error (say 'out of memory' or some other) while building\n> the map, we won't send relation attrs whereas with your change we will\n> unnecessarily send those in such a case.\n\nThat's a good point. I've reverted that change in the attached.\n\n> I feel there is no need to backpatch v6-0002. We can just make it a\n> HEAD-only change as that doesn't cause any bug even though it is\n> better not to send it. If we consider it as a HEAD-only change then\n> probably we can register it for PG-15. What do you think?\n\nOkay, I will see about creating a PG15 CF entry for 0002.\n\nPlease see attached v7-0001 with the part mentioned above fixed.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 31 May 2021 12:21:26 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Mon, May 31, 2021 at 8:51 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Thu, May 27, 2021 at 3:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Fri, May 21, 2021 at 1:12 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > >\n> >\n> > Why do we need to move send_relation_and_attrs() call? I think it\n> > doesn't matter much either way but OTOH, in the existing code, if\n> > there is an error (say 'out of memory' or some other) while building\n> > the map, we won't send relation attrs whereas with your change we will\n> > unnecessarily send those in such a case.\n>\n> That's a good point. I've reverted that change in the attached.\n>\n\nPushed.\n\n> > I feel there is no need to backpatch v6-0002. We can just make it a\n> > HEAD-only change as that doesn't cause any bug even though it is\n> > better not to send it. If we consider it as a HEAD-only change then\n> > probably we can register it for PG-15. 
What do you think?\n>\n> Okay, I will see about creating a PG15 CF entry for 0002.\n>\n\nThanks!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 1 Jun 2021 15:26:24 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Forget close an open relation in ReorderBufferProcessTXN()" }, { "msg_contents": "On Tue, Jun 1, 2021 at 6:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Mon, May 31, 2021 at 8:51 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Thu, May 27, 2021 at 3:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > Why do we need to move send_relation_and_attrs() call? I think it\n> > > doesn't matter much either way but OTOH, in the existing code, if\n> > > there is an error (say 'out of memory' or some other) while building\n> > > the map, we won't send relation attrs whereas with your change we will\n> > > unnecessarily send those in such a case.\n> >\n> > That's a good point. I've reverted that change in the attached.\n>\n> Pushed.\n\nThank you.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 1 Jun 2021 21:04:38 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Forget close an open relation in ReorderBufferProcessTXN()" } ]
[ { "msg_contents": "Hi,\n\nAttached patch removes \"is_foreign_table\" from transformCreateStmt()\nsince it already has cxt.isforeign that can serve the same purpose.\n\nRegards,\nAmul", "msg_date": "Thu, 15 Apr 2021 17:04:08 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Remove redundant variable from transformCreateStmt" }, { "msg_contents": "Thanks Amul, this looks pretty straight forward. LGTM.\nI have also run the regression on master and seems good.\n\nRegards,\nJeevan Ladhe", "msg_date": "Thu, 15 Apr 2021 17:13:33 +0530", "msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Remove redundant variable from transformCreateStmt" }, { "msg_contents": "On Thu, Apr 15, 2021 at 5:04 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> Hi,\n>\n> Attached patch removes \"is_foreign_table\" from transformCreateStmt()\n> since it already has cxt.isforeign that can serve the same purpose.\n\nYeah having that variable as \"is_foreign_table\" doesn't make sense\nwhen we have the info in ctx. I'm wondering whether we can do the\nfollowing (like transformFKConstraints). It will be more readable and\nwe could also add more comments on why we don't skip validation for\ncheck constraints i.e. 
constraint->skip_validation = false in case for\nforeign tables.\n\nbool skip_validation = true;\n if (IsA(stmt, CreateForeignTableStmt))\n {\n cxt.stmtType = \"CREATE FOREIGN TABLE\";\n cxt.isforeign = true;\n skip_validation = false; ----> <<<add comments here>>>\n }\ntransformCheckConstraints(&cxt, skip_validation);\n\nAlternatively, we could also remove skipValidation function parameter\n(since transformCheckConstraints is a static function, I think it's\nokay) and modify transformCheckConstraints, then we can do following:\n\nIn transformCreateStmt:\nif (!ctx.isforeign)\n transformCheckConstraints(&ctx);\n\nIn transformAlterTableStmt: we can remove transformCheckConstraints\nentirely because calling transformCheckConstraints with skipValidation\n= false does nothing and has no value. This way we could save a\nfunction call.\n\nI prefer removing the skipValidation parameter from\ntransformCheckConstraints. Others might have different opinions.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 15 Apr 2021 17:47:45 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove redundant variable from transformCreateStmt" }, { "msg_contents": "On Thu, Apr 15, 2021 at 5:47 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Apr 15, 2021 at 5:04 PM Amul Sul <sulamul@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > Attached patch removes \"is_foreign_table\" from transformCreateStmt()\n> > since it already has cxt.isforeign that can serve the same purpose.\n>\n> Yeah having that variable as \"is_foreign_table\" doesn't make sense\n> when we have the info in ctx. I'm wondering whether we can do the\n> following (like transformFKConstraints). It will be more readable and\n> we could also add more comments on why we don't skip validation for\n> check constraints i.e. 
constraint->skip_validation = false in case for\n> foreign tables.\n>\n> bool skip_validation = true;\n> if (IsA(stmt, CreateForeignTableStmt))\n> {\n> cxt.stmtType = \"CREATE FOREIGN TABLE\";\n> cxt.isforeign = true;\n> skip_validation = false; ----> <<<add comments here>>>\n> }\n> transformCheckConstraints(&cxt, skip_validation);\n>\n> Alternatively, we could also remove skipValidation function parameter\n> (since transformCheckConstraints is a static function, I think it's\n> okay) and modify transformCheckConstraints, then we can do following:\n>\n> In transformCreateStmt:\n> if (!ctx.isforeign)\n> transformCheckConstraints(&ctx);\n>\n> In transformAlterTableStmt: we can remove transformCheckConstraints\n> entirely because calling transformCheckConstraints with skipValidation\n> = false does nothing and has no value. This way we could save a\n> function call.\n>\n> I prefer removing the skipValidation parameter from\n> transformCheckConstraints. Others might have different opinions.\n>\n\nThen we also need to remove the transformCheckConstraints() dummy call\nfrom transformAlterTableStmt() which was added for the readability.\nAlso, this change to transformCheckConstraints() will make it\ninconsistent with transformFKConstraints().\n\nI think we shouldn't worry too much about this function call overhead\nhere since this is a slow utility path, and that is the reason the\ncurrent structure doesn't really bother me.\n\nRegards,\nAmul\n\n\n", "msg_date": "Thu, 15 Apr 2021 18:34:11 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Remove redundant variable from transformCreateStmt" }, { "msg_contents": "IMHO, I think the idea here was to just get rid of an unnecessary variable\nrather than refactoring.\n\nOn Thu, Apr 15, 2021 at 5:48 PM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Thu, Apr 15, 2021 at 5:04 PM Amul Sul <sulamul@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > Attached patch removes 
\"is_foreign_table\" from transformCreateStmt()\n> > since it already has cxt.isforeign that can serve the same purpose.\n>\n> Yeah having that variable as \"is_foreign_table\" doesn't make sense\n> when we have the info in ctx. I'm wondering whether we can do the\n> following (like transformFKConstraints). It will be more readable and\n> we could also add more comments on why we don't skip validation for\n> check constraints i.e. constraint->skip_validation = false in case for\n> foreign tables.\n>\n\nTo address your concern here, I think it can be addressed by adding a\ncomment\njust before we make a call to transformCheckConstraints().\n\nIn transformAlterTableStmt: we can remove transformCheckConstraints\n> entirely because calling transformCheckConstraints with skipValidation\n> = false does nothing and has no value. This way we could save a\n> function call.\n>\n> I prefer removing the skipValidation parameter from\n> transformCheckConstraints. Others might have different opinions.\n>\n\nI think this is intentional, to keep the code consistent with the CREATE\nTABLE path i.e. transformCreateStmt(). 
Here is what the comment atop\ntransformCheckConstraints() reads:\n\n/*\n * transformCheckConstraints\n * handle CHECK constraints\n *\n * Right now, there's nothing to do here when called from ALTER TABLE,\n * but the other constraint-transformation functions are called in both\n * the CREATE TABLE and ALTER TABLE paths, so do the same here, and just\n * don't do anything if we're not authorized to skip validation.\n */\n\nThis was originally discussed in thread[1] and commit:\nf27a6b15e6566fba7748d0d9a3fc5bcfd52c4a1b\n\n[1]\nhttps://www.postgresql.org/message-id/flat/1238779931.11913728.1449143089410.JavaMail.yahoo%40mail.yahoo.com#f2d8318b6beef37dfff06baa9a1538b7\n\n\nRegards,\nJeevan Ladhe", "msg_date": "Thu, 15 Apr 2021 20:39:47 +0530", "msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Remove redundant variable from transformCreateStmt" }, { "msg_contents": "On Thu, Apr 15, 2021 at 8:40 PM Jeevan Ladhe\n<jeevan.ladhe@enterprisedb.com> wrote:\n> IMHO, I think the idea here was to just get rid of an unnecessary variable\n> rather than refactoring.\n>\n> On Thu, Apr 15, 2021 at 5:48 PM Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>\n>> On Thu, Apr 15, 2021 at 5:04 PM Amul Sul <sulamul@gmail.com> wrote:\n>> >\n>> > Hi,\n>> >\n>> > Attached patch removes \"is_foreign_table\" from transformCreateStmt()\n>> > since it already has cxt.isforeign that can serve the same purpose.\n>>\n>> Yeah having that variable as \"is_foreign_table\" doesn't make sense\n>> when we have the info in ctx. I'm wondering whether we can do the\n>> following (like transformFKConstraints). It will be more readable and\n>> we could also add more comments on why we don't skip validation for\n>> check constraints i.e. 
constraint->skip_validation = false in case for\n>> foreign tables.\n>\n> To address your concern here, I think it can be addressed by adding a comment\n> just before we make a call to transformCheckConstraints().\n\n+1. The comment * If creating a new table (but not a foreign table),\nwe can safely skip * in transformCheckConstraints just says that we\ndon't mark skip_validation = true for foreign tables. But the\ndiscussion that led to the commit 86705aa8 [1] has the information as\nto why it is so. Although, I have not gone through it entirely, having\nsomething like \"newly created foreign tables can have data at the\nmoment they created, so the constraint validation cannot be skipped\"\nin transformCreateStmt before calling transformCheckConstraints gives\nan idea as to why we don't skip validation.\n\n[1] - https://www.postgresql.org/message-id/flat/d2b7419f-4a71-cf86-cc99-bfd0f359a1ea%40lab.ntt.co.jp\n\n> I think this is intentional, to keep the code consistent with the CREATE\n> TABLE path i.e. transformCreateStmt(). 
Here is what the comment atop\n> transformCheckConstraints() reads:\n>\n> /*\n> * transformCheckConstraints\n> * handle CHECK constraints\n> *\n> * Right now, there's nothing to do here when called from ALTER TABLE,\n> * but the other constraint-transformation functions are called in both\n> * the CREATE TABLE and ALTER TABLE paths, so do the same here, and just\n> * don't do anything if we're not authorized to skip validation.\n> */\n\nYeah, I re-read it and it looks like it's intentional for consistency reasons.\n\nI'm not opposed to this patch as it clearly removes an unnecessary variable.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 16 Apr 2021 06:26:02 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove redundant variable from transformCreateStmt" }, { "msg_contents": "On Fri, Apr 16, 2021 at 6:26 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Apr 15, 2021 at 8:40 PM Jeevan Ladhe\n> <jeevan.ladhe@enterprisedb.com> wrote:\n> > IMHO, I think the idea here was to just get rid of an unnecessary variable\n> > rather than refactoring.\n> >\n> > On Thu, Apr 15, 2021 at 5:48 PM Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >>\n> >> On Thu, Apr 15, 2021 at 5:04 PM Amul Sul <sulamul@gmail.com> wrote:\n> >> >\n> >> > Hi,\n> >> >\n> >> > Attached patch removes \"is_foreign_table\" from transformCreateStmt()\n> >> > since it already has cxt.isforeign that can serve the same purpose.\n> >>\n> >> Yeah having that variable as \"is_foreign_table\" doesn't make sense\n> >> when we have the info in ctx. I'm wondering whether we can do the\n> >> following (like transformFKConstraints). It will be more readable and\n> >> we could also add more comments on why we don't skip validation for\n> >> check constraints i.e. 
constraint->skip_validation = false in case for\n> >> foreign tables.\n> >\n> > To address your concern here, I think it can be addressed by adding a comment\n> > just before we make a call to transformCheckConstraints().\n>\n> +1.\n\nOk, added the comment in the attached version.\n\nThanks Jeevan & Bharat for the review.\n\nRegards,\nAmul", "msg_date": "Mon, 19 Apr 2021 09:28:06 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Remove redundant variable from transformCreateStmt" }, { "msg_contents": "On Mon, Apr 19, 2021 at 9:28 AM Amul Sul <sulamul@gmail.com> wrote:\n>\n> On Fri, Apr 16, 2021 at 6:26 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Thu, Apr 15, 2021 at 8:40 PM Jeevan Ladhe\n> > <jeevan.ladhe@enterprisedb.com> wrote:\n> > > IMHO, I think the idea here was to just get rid of an unnecessary variable\n> > > rather than refactoring.\n> > >\n> > > On Thu, Apr 15, 2021 at 5:48 PM Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >>\n> > >> On Thu, Apr 15, 2021 at 5:04 PM Amul Sul <sulamul@gmail.com> wrote:\n> > >> >\n> > >> > Hi,\n> > >> >\n> > >> > Attached patch removes \"is_foreign_table\" from transformCreateStmt()\n> > >> > since it already has cxt.isforeign that can serve the same purpose.\n> > >>\n> > >> Yeah having that variable as \"is_foreign_table\" doesn't make sense\n> > >> when we have the info in ctx. I'm wondering whether we can do the\n> > >> following (like transformFKConstraints). It will be more readable and\n> > >> we could also add more comments on why we don't skip validation for\n> > >> check constraints i.e. 
constraint->skip_validation = false in case for\n> > >> foreign tables.\n> > >\n> > > To address your concern here, I think it can be addressed by adding a comment\n> > > just before we make a call to transformCheckConstraints().\n> >\n> > +1.\n>\n> Ok, added the comment in the attached version.\n\nKindly ignore the previous version -- has unnecessary change.\nSee the attached.\n\nRegards,\nAmul", "msg_date": "Mon, 19 Apr 2021 09:32:22 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Remove redundant variable from transformCreateStmt" }, { "msg_contents": "On Mon, Apr 19, 2021 at 9:32 AM Amul Sul <sulamul@gmail.com> wrote:\n> Kindly ignore the previous version -- has unnecessary change.\n> See the attached.\n\nThanks for the patch!\n\nHow about a slight rewording of the added comment to \"Constraints\nvalidation can be skipped for a newly created table as it contains no\ndata. However, this is not necessarily true for a foreign table.\"?\n\nYou may want to add it to the commitfest if not done already. And I\ndon't think we need to backpatch this as it's not critical.\n\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 19 Apr 2021 11:05:22 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove redundant variable from transformCreateStmt" }, { "msg_contents": "On Mon, Apr 19, 2021 at 11:05 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Apr 19, 2021 at 9:32 AM Amul Sul <sulamul@gmail.com> wrote:\n> > Kindly ignore the previous version -- has unnecessary change.\n> > See the attached.\n>\n> Thanks for the patch!\n>\n> How about a slight rewording of the added comment to \"Constraints\n> validation can be skipped for a newly created table as it contains no\n> data. 
However, this is not necessarily true for a foreign table.\"?\n>\n\nWell, wording is quite subjective, let's leave this to the committer\nfor the final decision, I don't see anything wrong with it.\n\n> You may want to add it to the commitfest if not done already. And I\n> don't think we need to backpatch this as it's not critical.\n\nThis is not fixing anything so not a relevant candidate for the backporting.\n\nRegards,\nAmul\n\n\n", "msg_date": "Mon, 19 Apr 2021 11:54:00 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Remove redundant variable from transformCreateStmt" }, { "msg_contents": "I'd do it like this. Note I removed an if/else block in addition to\nyour changes.\n\nI couldn't convince myself that this is worth pushing though; either we\npush it to all branches (which seems unwarranted) or we create\nback-patching hazards.\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W", "msg_date": "Thu, 29 Apr 2021 11:20:06 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Remove redundant variable from transformCreateStmt" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> I'd do it like this. Note I removed an if/else block in addition to\n> your changes.\n\n> I couldn't convince myself that this is worth pushing though; either we\n> push it to all branches (which seems unwarranted) or we create\n> back-patching hazards.\n\nYeah ... an advantage of the if/else coding is that it'd likely be\nsimple to extend to cover additional statement types, should we ever\nwish to do that. 
The rendering you have here is nice and compact,\nbut it would not scale up well.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 29 Apr 2021 11:42:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Remove redundant variable from transformCreateStmt" }, { "msg_contents": "On 2021-Apr-29, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > I'd do it like this. Note I removed an if/else block in addition to\n> > your changes.\n> \n> > I couldn't convince myself that this is worth pushing though; either we\n> > push it to all branches (which seems unwarranted) or we create\n> > back-patching hazards.\n> \n> Yeah ... an advantage of the if/else coding is that it'd likely be\n> simple to extend to cover additional statement types, should we ever\n> wish to do that. The rendering you have here is nice and compact,\n> but it would not scale up well.\n\nThat makes sense. But that part is not in Amul's patch -- he was only\non about removing the is_foreign_table Boolean. If I remove the if/else\nblock change, does the rest of the patch looks something we'd want to\nhave? I kinda agree that the redundant variable is \"ugly\". Is it worth\nremoving? My hunch is no.\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\n\n", "msg_date": "Thu, 29 Apr 2021 14:39:42 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Remove redundant variable from transformCreateStmt" }, { "msg_contents": "On Thu, Apr 29, 2021 at 02:39:42PM -0400, Alvaro Herrera wrote:\n> On 2021-Apr-29, Tom Lane wrote:\n> > Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > > I'd do it like this. Note I removed an if/else block in addition to\n> > > your changes.\n> > \n> > > I couldn't convince myself that this is worth pushing though; either we\n> > > push it to all branches (which seems unwarranted) or we create\n> > > back-patching hazards.\n> > \n> > Yeah ... 
an advantage of the if/else coding is that it'd likely be\n> > simple to extend to cover additional statement types, should we ever\n> > wish to do that. The rendering you have here is nice and compact,\n> > but it would not scale up well.\n> \n> That makes sense. But that part is not in Amul's patch -- he was only\n> on about removing the is_foreign_table Boolean. If I remove the if/else\n> block change, does the rest of the patch looks something we'd want to\n> have? I kinda agree that the redundant variable is \"ugly\". Is it worth\n> removing? My hunch is no.\n\nGetting rid of a redundant, boolean variable is good not because it's more\nefficient but because it's one fewer LOC to read and maintain (and an\nopportunity for inconsistency, I suppose).\n\nAlso, this is a roundabout and too-verbose way to invert a boolean:\n| transformCheckConstraints(&cxt, !is_foreign_table ? true : false);\n\n-- \nJustin\n\nPS. It's also not pythonic ;)\n\n\n", "msg_date": "Thu, 29 Apr 2021 20:37:46 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Remove redundant variable from transformCreateStmt" }, { "msg_contents": "On Fri, Apr 30, 2021 at 7:07 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Thu, Apr 29, 2021 at 02:39:42PM -0400, Alvaro Herrera wrote:\n> > On 2021-Apr-29, Tom Lane wrote:\n> > > Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > > > I'd do it like this. Note I removed an if/else block in addition to\n> > > > your changes.\n> > >\n> > > > I couldn't convince myself that this is worth pushing though; either we\n> > > > push it to all branches (which seems unwarranted) or we create\n> > > > back-patching hazards.\n> > >\n> > > Yeah ... an advantage of the if/else coding is that it'd likely be\n> > > simple to extend to cover additional statement types, should we ever\n> > > wish to do that. 
The rendering you have here is nice and compact,\n> > > but it would not scale up well.\n> >\n> > That makes sense. But that part is not in Amul's patch -- he was only\n> > on about removing the is_foreign_table Boolean. If I remove the if/else\n> > block change, does the rest of the patch looks something we'd want to\n> > have? I kinda agree that the redundant variable is \"ugly\". Is it worth\n> > removing? My hunch is no.\n>\n> Getting rid of a redundant, boolean variable is good not because it's more\n> efficient but because it's one fewer LOC to read and maintain (and an\n> opportunity for inconsistency, I suppose).\n\nYes.\n\n> Also, this is a roundabout and too-verbose way to invert a boolean:\n> | transformCheckConstraints(&cxt, !is_foreign_table ? true : false);\n\nI agree to remove only the redundant variable, is_foreign_table but\nnot the if else block as Tom said: it's not scalable. We don't need to\nback patch this change.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 30 Apr 2021 10:49:13 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove redundant variable from transformCreateStmt" }, { "msg_contents": "On Fri, Apr 30, 2021 at 10:49 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Apr 30, 2021 at 7:07 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > On Thu, Apr 29, 2021 at 02:39:42PM -0400, Alvaro Herrera wrote:\n> > > On 2021-Apr-29, Tom Lane wrote:\n> > > > Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > > > > I'd do it like this. Note I removed an if/else block in addition to\n> > > > > your changes.\n> > > >\n> > > > > I couldn't convince myself that this is worth pushing though; either we\n> > > > > push it to all branches (which seems unwarranted) or we create\n> > > > > back-patching hazards.\n> > > >\n> > > > Yeah ... 
an advantage of the if/else coding is that it'd likely be\n> > > > simple to extend to cover additional statement types, should we ever\n> > > > wish to do that. The rendering you have here is nice and compact,\n> > > > but it would not scale up well.\n> > >\n> > > That makes sense. But that part is not in Amul's patch -- he was only\n> > > on about removing the is_foreign_table Boolean. If I remove the if/else\n> > > block change, does the rest of the patch looks something we'd want to\n> > > have? I kinda agree that the redundant variable is \"ugly\". Is it worth\n> > > removing? My hunch is no.\n> >\n> > Getting rid of a redundant, boolean variable is good not because it's more\n> > efficient but because it's one fewer LOC to read and maintain (and an\n> > opportunity for inconsistency, I suppose).\n>\n> Yes.\n>\n> > Also, this is a roundabout and too-verbose way to invert a boolean:\n> > | transformCheckConstraints(&cxt, !is_foreign_table ? true : false);\n>\n> I agree to remove only the redundant variable, is_foreign_table but\n> not the if else block as Tom said: it's not scalable.\n\n+1.\n\nRegards,\nAmul\n\n\n", "msg_date": "Mon, 3 May 2021 09:26:45 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Remove redundant variable from transformCreateStmt" }, { "msg_contents": "On 2021-Apr-29, Justin Pryzby wrote:\n\n> Getting rid of a redundant, boolean variable is good not because it's more\n> efficient but because it's one fewer LOC to read and maintain (and an\n> opportunity for inconsistency, I suppose).\n\nMakes sense. Pushed. Thanks everyone.\n\n> Also, this is a roundabout and too-verbose way to invert a boolean:\n> | transformCheckConstraints(&cxt, !is_foreign_table ? true : false);\n\nIt is, yeah.\n\n> PS. It's also not pythonic ;)\n\nUmm. If you say so. But this is not Python ...\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"Man never knows what he is capable of until he tries\" (C. 
Dickens)\n\n\n", "msg_date": "Thu, 6 May 2021 17:32:01 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Remove redundant variable from transformCreateStmt" } ]
[ { "msg_contents": "Hi,\n\nCommit mentioned in the $subject changed the FirstBootstrapObjectId\n(transam.h) from 12000 to 13000. I was trying to understand the reason\nbehind this change, but was not able to gather that information. Also didn't\nfind anything in the commit message either.\n\nCan you please explain those changes? Is it accidental or intentional?\n\nThanks,\nRushabh Lathia\nwww.EnterpriseDB.com", "msg_date": "Thu, 15 Apr 2021 19:33:29 +0530", "msg_from": "Rushabh Lathia <rushabh.lathia@gmail.com>", "msg_from_op": true, "msg_subject": "Commit ab596105b55 - BRIN minmax-multi indexes" }, { "msg_contents": "Rushabh Lathia <rushabh.lathia@gmail.com> writes:\n> Commit mentioned in the $subject changed the FirstBootstrapObjectId\n> (transam.h) from 12000 to 13000. I was trying to understand the reason\n> behind this change, but was not able to gather that information. Also didn't\n> find anything in the commit message either.\n\nAs of right now, genbki.pl's OID counter reaches 12036, so it's\npretty clear that 12000 no longer works. (I have this figure in\nmy head because I noted it while working on [1].) 13000 might\nwell be an excessive jump though. 
Do you have a concrete problem\nwith it?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/3737988.1618451008@sss.pgh.pa.us\n\n\n", "msg_date": "Thu, 15 Apr 2021 10:19:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Commit ab596105b55 - BRIN minmax-multi indexes" }, { "msg_contents": "On Thu, Apr 15, 2021 at 7:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Rushabh Lathia <rushabh.lathia@gmail.com> writes:\n> > Commit mentioned in the $subject changed the FirstBootstrapObjectId\n> > (transam.h) from 12000 to 13000. I was trying to understand the reason\n> > behind this change, but was not able to gather that information. Also\n> didn't\n> > find anything in the commit message either.\n>\n> As of right now, genbki.pl's OID counter reaches 12036, so it's\n> pretty clear that 12000 no longer works. (I have this figure in\n> my head because I noted it while working on [1].) 13000 might\n> well be an excessive jump though. Do you have a concrete problem\n> with it?\n>\n\nIn EDB Advance Server, it has their own set of system objects. Due\nto mentioned commit (where it changes the FirstBootstrapObjectId to 13000),\nnow system objects exceeding the FirstNormalObjectId.\n\n\n> regards, tom lane\n>\n> [1]\n> https://www.postgresql.org/message-id/flat/3737988.1618451008@sss.pgh.pa.us\n>\n\n\n-- \nRushabh Lathia", "msg_date": "Thu, 15 Apr 2021 19:59:01 +0530", "msg_from": "Rushabh Lathia <rushabh.lathia@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Commit ab596105b55 - BRIN minmax-multi indexes" }, { "msg_contents": "Hi,\n\nOn 4/15/21 4:03 PM, Rushabh Lathia wrote:\n> Hi,\n> \n> Commit mentioned in the $subject changed the FirstBootstrapObjectId\n> (transam.h) from 12000 to 13000.  I was trying to understand the reason\n> behind this change, but was not able to gather that information. Also didn't\n> find anything in the commit message either.\n> \n> Can you please explain those changes? Is it accidental or intentional?\n> \n\nYeah, it's an intentional change - I should have mentioned it explicitly\nin the thread, probably.\n\nWe're assigning OIDs to catalog entries, at different phases, and each\nphase has a range of OIDs to ensure the values are unique. The first\nphase is genbki.pl which transforms the .dat files, assigns OIDs in the\n[FirstGenbkiObjectId, FirstBootstrapObjectId) range.\n\nHowever, patches are adding new stuff to the .dat files, so we may hit\nthe upper limit. The minmax patch happened to add enough new entries to\nhit it, i.e. the genbki.pl needed OIDs above FirstBootstrapObjectId and\nthe compilation would fail. Try lowering the value back to 12000 and\nrun \"make check\" again - it'll fail.\n\nThe limits are mostly arbitrary, the primary purpose is to ensure the\nOIDs are unique etc. 
So the patch simply added 1000 values to the genbki\nrange, to fix this.\n\nNot sure what'll happen once we fill all those ranges, but we're quite\nfar from that, I think. It took us ~20 years to get 2000 OIDs in the\ngenbki range, and the bootstrap has ~1000 OIDs. So we've used only about\nhalf the values between 10k and 16k, so far ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 15 Apr 2021 16:39:50 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Commit ab596105b55 - BRIN minmax-multi indexes" }, { "msg_contents": "On 4/15/21 4:29 PM, Rushabh Lathia wrote:\n> \n> \n> On Thu, Apr 15, 2021 at 7:49 PM Tom Lane <tgl@sss.pgh.pa.us\n> <mailto:tgl@sss.pgh.pa.us>> wrote:\n> \n> Rushabh Lathia <rushabh.lathia@gmail.com\n> <mailto:rushabh.lathia@gmail.com>> writes:\n> > Commit mentioned in the $subject changed the FirstBootstrapObjectId\n> > (transam.h) from 12000 to 13000.  I was trying to understand the\n> reason\n> > behind this change, but was not able to gather that information.\n> Also didn't\n> > find anything in the commit message either.\n> \n> As of right now, genbki.pl <http://genbki.pl>'s OID counter reaches\n> 12036, so it's\n> pretty clear that 12000 no longer works.  (I have this figure in\n> my head because I noted it while working on [1].)  13000 might\n> well be an excessive jump though.  Do you have a concrete problem\n> with it?\n> \n\nYeah, the bump from 12000 to 13000 might be unnecessarily large. But\nconsidering the bootstrap uses only about 1000 OIDs from the >=13000\nrange, I don't see this as a problem. Surely we can move the ranges in\nthe future, if needed?\n\n> \n> In EDB Advance Server, it has their own set of system objects.  Due\n> to mentioned commit (where it changes the FirstBootstrapObjectId to 13000),\n> now system objects exceeding the FirstNormalObjectId.  
\n> \n\nI haven't checked what the EDBAS does exactly, but how could it hit\n16384 because of custom catalogs? I haven't checked what exactly is\nEDBAS doing, but surely it does not have thousands of catalogs, right?\n\nIt's OK to lower FirstBootstrapObjectId to e.g. 12500 during the merge,\nif that solves the issue for EDBAS. As I said, those ranges are mostly\narbitrary anyway, and EDBAS already has catalogs differences.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 15 Apr 2021 17:01:51 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Commit ab596105b55 - BRIN minmax-multi indexes" }, { "msg_contents": "Rushabh Lathia <rushabh.lathia@gmail.com> writes:\n> On Thu, Apr 15, 2021 at 7:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> As of right now, genbki.pl's OID counter reaches 12036, so it's\n>> pretty clear that 12000 no longer works. (I have this figure in\n>> my head because I noted it while working on [1].) 13000 might\n>> well be an excessive jump though. Do you have a concrete problem\n>> with it?\n\n> In EDB Advance Server, it has their own set of system objects. Due\n> to mentioned commit (where it changes the FirstBootstrapObjectId to 13000),\n> now system objects exceeding the FirstNormalObjectId.\n\nYou might want to rethink where you're allocating those OIDs. Even if\nwe didn't move FirstBootstrapObjectId today, it's inevitably going to\ncreep up over time.\n\nAs I recall the discussions about this, we'd expected that add-on products\nthat need OIDs in the bootstrap range would take them from the 8K-10K\nrange, not above FirstBootstrapObjectId. Because of the possibility of\nhaving lots of system locales creating lots of collations, the amount of\navailable OID space above FirstBootstrapObjectId is not as predictable as\nyou might wish. 
(I suspect eventually we're going to have to back off\nthe idea of creating every possible locale at bootstrap, but we haven't\naddressed that yet.)\n\nWe are overlapping development use of the 8K-10K OID range with it being\navailable for add-ons post-release, which might make it hard to do testing\nagainst HEAD. But you could renumber the not-yet-frozen objects' IDs out\nof the way whenever you want to make a merge.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 15 Apr 2021 11:10:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Commit ab596105b55 - BRIN minmax-multi indexes" } ]
[ { "msg_contents": "Forking https://www.postgresql.org/message-id/20210328231433.GI15100@telsasoft.com\n\nI gave a suggestion on how to reduce the \"lines of diff\" metric almost to nothing,\nallowing a very small \"fudge factor\", which I think makes this a pretty\ngood metric rather than a passable one.\n\nThoughts ?\n\nOn Sun, Mar 28, 2021 at 06:14:33PM -0500, Justin Pryzby wrote:\n> On Sun, Mar 28, 2021 at 04:48:29PM -0400, Andrew Dunstan wrote:\n> > Nothing is hidden here - the diffs are reported, see for example\n> > <https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=crake&dt=2021-03-28%2015%3A37%3A07&stg=xversion-upgrade-REL9_4_STABLE-HEAD>\n> > What we're comparing here is target pg_dumpall against the original\n> > source vs target pg_dumpall against the upgraded source.\n> \n> The command being run is:\n> \n> https://github.com/PGBuildFarm/client-code/blob/master/PGBuild/Modules/TestUpgradeXversion.pm#L610\n> system( \"diff -I '^-- ' -u $upgrade_loc/origin-$oversion.sql \"\n> . \"$upgrade_loc/converted-$oversion-to-$this_branch.sql \"\n> . \"> $upgrade_loc/dumpdiff-$oversion 2>&1\");\n> ...\n> \tmy $difflines = `wc -l < $upgrade_loc/dumpdiff-$oversion`;\n> \n> where -I means: --ignore-matching-lines=RE\n> \n> I think wc -l should actually be grep -c '^[-+]'\n> otherwise context lines count for as much as diff lines.\n> You could write that with diff -U0 |wc -l, except the context is useful to\n> humans.\n> \n> With some more effort, the number of lines of diff can be very small, allowing\n> a smaller fudge factor.
\n> \n> For upgrade from v10:\n> time make -C src/bin/pg_upgrade check oldsrc=`pwd`/10 oldbindir=`pwd`/10/tmp_install/usr/local/pgsql/bin\n> \n> $ diff -u src/bin/pg_upgrade/tmp_check/dump1.sql src/bin/pg_upgrade/tmp_check/dump2.sql |wc -l\n> 622\n> \n> Without context:\n> $ diff -u src/bin/pg_upgrade/tmp_check/dump1.sql src/bin/pg_upgrade/tmp_check/dump2.sql |grep -c '^[-+]'\n> 142\n> \n> Without comments:\n> $ diff -I '^-- ' -u src/bin/pg_upgrade/tmp_check/dump1.sql src/bin/pg_upgrade/tmp_check/dump2.sql |grep -c '^[-+]'\n> 130\n> \n> Without SET default stuff:\n> diff -I '^$' -I \"SET default_table_access_method = heap;\" -I \"^SET default_toast_compression = 'pglz';$\" -I '^-- ' -u /home/pryzbyj/src/postgres/src/bin/pg_upgrade/tmp_check/dump1.sql /home/pryzbyj/src/postgres/src/bin/pg_upgrade/tmp_check/dump2.sql |less |grep -c '^[-+]'\n> 117\n> \n> Without trigger function call noise:\n> diff -I \"^CREATE TRIGGER [_[:alnum:]]\\+ .* FOR EACH \\(ROW\\|STATEMENT\\) EXECUTE \\(PROCEDURE\\|FUNCTION\\)\" -I '^$' -I \"SET default_table_access_method = heap;\" -I \"^SET default_toast_compression = 'pglz';$\" -I '^-- ' -u /home/pryzbyj/src/postgres/src/bin/pg_upgrade/tmp_check/dump1.sql /home/pryzbyj/src/postgres/src/bin/pg_upgrade/tmp_check/dump2.sql |grep -c '^[-+]'\n> 11\n> \n> Maybe it's important not to totally ignore that, and instead perhaps clean up\n> the known/accepted changes like s/FUNCTION/PROCEDURE/:\n> \n> </home/pryzbyj/src/postgres/src/bin/pg_upgrade/tmp_check/dump2.sql sed '/^CREATE TRIGGER/s/FUNCTION/PROCEDURE/' |diff -I '^$' -I \"SET default_table_access_method = heap;\" -I \"^SET default_toast_compression = 'pglz';$\" -I '^-- ' -u /home/pryzbyj/src/postgres/src/bin/pg_upgrade/tmp_check/dump1.sql - |grep -c '^[-+]'\n> 11\n> \n> It seems weird that we don't quote \"heap\" but we quote tablespaces and not\n> toast compression methods.\n\n\n", "msg_date": "Thu, 15 Apr 2021 10:37:22 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", 
"msg_from_op": true, "msg_subject": "buildfarm xversion diff" } ]
[ { "msg_contents": "|commit 7a50bb690b4837d29e715293c156cff2fc72885c\n|Author: Andres Freund <andres@anarazel.de>\n|Date: Fri Mar 16 23:13:12 2018 -0700\n|\n| Add 'unit' parameter to ExplainProperty{Integer,Float}.\n| \n| This allows to deduplicate some existing code, but mainly avoids some\n| duplication in upcoming commits.\n| \n| In passing, fix variable names indicating wrong unit (seconds instead\n| of ms).\n| \n| Author: Andres Freund\n| Discussion: https://postgr.es/m/20180314002740.cah3mdsonz5mxney@alap3.anarazel.de\n\n@@ -1304,8 +1299,8 @@ ExplainNode(PlanState *planstate, List *ancestors,\n planstate->instrument && planstate->instrument->nloops > 0)\n {\n double nloops = planstate->instrument->nloops;\n- double startup_sec = 1000.0 * planstate->instrument->startup / nloops;\n- double total_sec = 1000.0 * planstate->instrument->total / nloops;\n+ double startup_ms = 1000.0 * planstate->instrument->startup / nloops;\n+ double total_ms = 1000.0 * planstate->instrument->total / nloops;\n...\n if (es->timing)\n {\n- ExplainPropertyFloat(\"Actual Startup Time\", startup_sec, 3, es);\n- ExplainPropertyFloat(\"Actual Total Time\", total_sec, 3, es);\n+ ExplainPropertyFloat(\"Actual Startup Time\", \"s\", startup_ms,\n+ 3, es);\n+ ExplainPropertyFloat(\"Actual Total Time\", \"s\", total_ms,\n+ 3, es);\n\nThere's 3 pairs of these, and the other two pairs use \"ms\":\n\n$ git grep 'Actual.*Time' src/backend/commands/explain.c \nsrc/backend/commands/explain.c: ExplainPropertyFloat(\"Actual Startup Time\", \"s\", startup_ms,\nsrc/backend/commands/explain.c: ExplainPropertyFloat(\"Actual Total Time\", \"s\", total_ms,\nsrc/backend/commands/explain.c: ExplainPropertyFloat(\"Actual Startup Time\", \"ms\", 0.0, 3, es);\nsrc/backend/commands/explain.c: ExplainPropertyFloat(\"Actual Total Time\", \"ms\", 0.0, 3, es);\nsrc/backend/commands/explain.c: ExplainPropertyFloat(\"Actual Startup Time\", \"ms\",\nsrc/backend/commands/explain.c: ExplainPropertyFloat(\"Actual Total 
Time\", \"ms\",\n\nText mode uses appendStringInfo() for high density output, so this only affects\nnon-text output, but it turns out that units aren't shown for nontext format\nanyway - this seems like a deficiency, but it means there's no visible bug.\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 15 Apr 2021 11:38:46 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "wrong units in ExplainPropertyFloat" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> Text mode uses appendStringInfo() for high density output, so this only affects\n> non-text output, but it turns out that units aren't shown for nontext format\n> anyway - this seems like a deficiency, but it means there's no visible bug.\n\nYeah, I concur: these should say \"ms\", but it's only latent so it's\nnot surprising nobody's noticed. Pushed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 16 Apr 2021 11:32:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: wrong units in ExplainPropertyFloat" } ]
[ { "msg_contents": "-- Targeting PG15; if too early / noise then please ignore.\n\nI've noticed there are a lot of places in the btree index\ninfrastructure (and also some other index AMs) that effectively\niterate over the attributes of the index tuple, but won't use\nindex_deform_tuple for reasons. However, this implies that they must\nrepeatedly call index_getattr, which in the worst case is O(n) for the\nn-th attribute, slowing down extraction of multi-column indexes\nsignificantly. As such, I've added some API that allows for iteration\n(ish) over index attributes.\n\nPlease find attached patch 0001 that improves the runtime complexity\nof many of these places by storing and reusing the offset of the last\nextracted attribute. This improves the worst-case runtime of\nextracting all attributes to O(n) for incremental attribute extraction\n(from O(n*n)). Note that finding the first offsets is still an O(n)\nworst case for starting at the n-th attribute, but nothing can be done\nabout that.\n\nAlmost all workloads for multi-column nbtree indexes that cannot use\nattcacheoff should see a benefit from this patch; only those that only\nuse row scans cannot use this optimization. Additionally, multi-column\ngist indexes could also see some (albeit limited) benefit, which is\nindeed useful when considering the newly added INCLUDE support in the\ngist AM.\n\nAlso attached is 0002, which dynamically truncates attribute prefixes\nof tuples whilst _binsrch-ing through a nbtree page. It greatly uses\nthe improved performance of 0001; they work very well together. The\nproblems that Peter (cc-ed) mentions in [0] only result in invalid\nsearch bounds when traversing the tree, but on the page level valid\nbounds can be constructed.\n\nThis is patchset 1 of a series of patches I'm starting for eventually\nadding static prefix truncation into nbtree infrastructure in\nPostgreSQL. 
I've put up a wiki page [1] with my current research and\nthoughts on that topic.\n\nPerformance\n-----------\n\nI've run some tests with regards to performance on my laptop, which\ntests nbtree index traversal. The test is based on a recent UK land\nregistry sales prices dataset (25744780 rows), being copied from one\ntable into an unlogged table with disabled autovacuum, with one index\nas specified by the result. Master @ 99964c4a, patched is with both\n0001 and 0002. The results are averages over 3 runs, with plain\nconfigure, compiled by gcc (Debian 6.3.0-18+deb9u1).\n\nINSERT (index definition) | master (s) | patched (s) | improv(%)\nUNIQUE (transaction) | 256851 | 251705 | 2.00\n(county, city, locality) | 154529 | 147495 | 4.55\n(county COLLATE \"en_US\", city, locality) | 174028 | 164165 | 5.67\n(always_null, county, city, locality) | 173090 | 166851 | 3.60\n\nSome testing for reindex indicates improvements there as well: Same\ncompiled version; all indexes on an unlogged table; REINDEX run 4\ntimes on each index, last 3 were averaged.\n\nREINDEX (index definition) | master (s) | patched (s) | improv(%)\nUNIQUE (transaction) | 11623 | 11692 | -0.6\n(county, city, locality) | 58299 | 54770 | 6.1\n(county COLLATE \"en_US\", city, locality) | 61790 | 55887 | 9.6\n(always_null, county, city, locality) | 69703 | 63925 | 8.3\n\nI am quite surprised with the results for the single-column unique\nindex insertions, as that was one of the points where I was suspecting\na slight decrease in performance for inserts. 
I haven't really checked\nwhy the performance increased, but I suspect it has to do with an\nimproved fast-path for finding the first attribute (we know it always\nstarts at offset 0 of the data section), but it might also just as\nwell be due to throttling (sadly, I do not have a stable benchmarking\nmachine, so my laptop will do).\n\nI'm also slightly disappointed with the results of the always_null\ninsert load; I had hoped for better results there, seeing the results\nfor the other 2 multi-column indexes.\n\n\nWith regards,\n\nMatthias van de Meent.\n\n[0] https://www.postgresql.org/message-id/CAH2-Wzn_NAyK4pR0HRWO0StwHmxjP5qyu+X8vppt030XpqrO6w@mail.gmail.com\n[1] https://wiki.postgresql.org/wiki/NBTree_Prefix_Truncation", "msg_date": "Thu, 15 Apr 2021 20:06:34 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Iterating on IndexTuple attributes and nbtree page-level dynamic\n prefix truncation" }, { "msg_contents": "On Thu, Apr 15, 2021 at 11:06 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> I've noticed there are a lot of places in the btree index\n> infrastructure (and also some other index AMs) that effectively\n> iterate over the attributes of the index tuple, but won't use\n> index_deform_tuple for reasons. However, this implies that they must\n> repeatedly call index_getattr, which in the worst case is O(n) for the\n> n-th attribute, slowing down extraction of multi-column indexes\n> significantly. As such, I've added some API that allows for iteration\n> (ish) over index attributes.\n\nInteresting approach. I think that in an ideal world we would have a\ntuple format with attribute lengths/offsets right in the header. But\nit's too late for that, so other approaches seem worth considering.\n\n> Also attached is 0002, which dynamically truncates attribute prefixes\n> of tuples whilst _binsrch-ing through a nbtree page. 
It greatly uses\n> the improved performance of 0001; they work very well together. The\n> problems that Peter (cc-ed) mentions in [0] only result in invalid\n> search bounds when traversing the tree, but on the page level valid\n> bounds can be constructed.\n>\n> This is patchset 1 of a series of patches I'm starting for eventually\n> adding static prefix truncation into nbtree infrastructure in\n> PostgreSQL. I've put up a wiki page [1] with my current research and\n> thoughts on that topic.\n\nThe idea of making _bt_truncate() produce new leaf page high keys\nbased on the lastleft tuple rather than the firstright tuple (i.e.\n+inf truncated attribute values rather than the current -inf) seems\nlike a non-starter. As you point out in \"1.) Suffix-truncation; -INF\nin high keys\" on the Postgres wiki page, the current approach\ntruncates firstright (not lastleft), making the left page's new high\nkey contain what you call a 'foreign' value. But I see that as a big\nadvantage of the current approach.\n\nConsider, for example, the nbtree high key \"continuescan\" optimization\nadded by commit 29b64d1d. The fact that leaf page high keys are\ngenerated in this way kind of allows us to \"peak\" on the page to the\nimmediate right before actually visiting it -- possibly without ever\nvisiting it (which is where the benefit of that particular\noptimization lies). _bt_check_unique() uses a similar trick. After the\nPostgres 12 work, _bt_check_unique() will only visit a second page in\nthe extreme case where we cannot possibly fit all of the relevant\nversion duplicates on even one whole leaf page (i.e. practically\nnever). There is also cleverness inside _bt_compare() to make sure\nthat we handle the boundary cases perfectly while descending the tree.\n\nYou might also consider how the nbtsplitloc.c code works with\nduplicates, and how that would be affected by +inf truncated\nattributes. 
The leaf-page-packing performed in the SPLIT_SINGLE_VALUE\ncase only goes ahead when the existing high key confirms that this\nmust be the rightmost page. Now, I guess that you could still do\nsomething like that if we switched to +inf semantics. But, the fact\nthat the new right page will have a 'foreign' value in the\nSPLIT_SINGLE_VALUE-split case is also of benefit -- it is practically\nempty right after the split (since the original/left page is packed\nfull), and we want this empty space to be eligible to either take more\nduplicates, or to take values that may happen to fit between the\nhighly duplicated value and the original foreign high key value. We\nwant that flexibility, I think.\n\nI also find -inf much more natural. If in the future we teach nbtree\nto truncate \"inside\" text attributes (say text columns), you'd pretty\nmuch be doing the same thing at the level of characters rather than\nwhole columns. The -inf semantics are like strcmp() semantics.\n\nIf you're going to pursue full prefix compression anyway, maybe you\nshould use a low key on the leaf level in cases where the optimization\nis in use. This creates complexity during page deletion, because the\nlow key in the subtree to the right of the deletion target subtree may\nneed to be updated. Perhaps you can find a way to make that work that\nisn't too complicated.\n\n> I've run some tests with regards to performance on my laptop; which\n> tests nbtree index traversal. The test is based on a recent UK land\n> registry sales prices dataset (25744780 rows), being copied from one\n> table into an unlogged table with disabled autovacuum, with one index\n> as specified by the result. Master @ 99964c4a, patched is with both\n> 0001 and 0002. The results are averages over 3 runs, with plain\n> configure, compiled by gcc (Debian 6.3.0-18+deb9u1).\n\nYou should probably account for index size here. 
I have lots of my own\ntests for space utilization, using data from a variety of sources.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Fri, 16 Apr 2021 09:03:01 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Iterating on IndexTuple attributes and nbtree page-level dynamic\n prefix truncation" }, { "msg_contents": "On Fri, 16 Apr 2021 at 18:03, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Thu, Apr 15, 2021 at 11:06 AM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > I've noticed there are a lot of places in the btree index\n> > infrastructure (and also some other index AMs) that effectively\n> > iterate over the attributes of the index tuple, but won't use\n> > index_deform_tuple for reasons. However, this implies that they must\n> > repeatedly call index_getattr, which in the worst case is O(n) for the\n> > n-th attribute, slowing down extraction of multi-column indexes\n> > significantly. As such, I've added some API that allows for iteration\n> > (ish) over index attributes.\n>\n> Interesting approach. I think that in an ideal world we would have a\n> tuple format with attribute lengths/offsets right in the header.\n\nI believe that that would indeed be ideal w.r.t. access speed, but\nalso quite expensive w.r.t. amount of data stored. This would add 2\nbytes per attribute in the current infrastructure (11 bits at least\nfor each attribute to store offsets), on the current 12 bytes of\noverhead per indextuple (= 8 for IndexTuple + 4 for ItemId). That is\nprobably always going to be a non-starter, seeing that we can\nrelatively easily optimize our current attribute access patterns.\n\n> But\n> it's too late for that, so other approaches seem worth considering.\n\nYep.\n\n> > Also attached is 0002, which dynamically truncates attribute prefixes\n> > of tuples whilst _binsrch-ing through a nbtree page. It greatly uses\n> > the improved performance of 0001; they work very well together. 
The\n> > problems that Peter (cc-ed) mentions in [0] only result in invalid\n> > search bounds when traversing the tree, but on the page level valid\n> > bounds can be constructed.\n> >\n> > This is patchset 1 of a series of patches I'm starting for eventually\n> > adding static prefix truncation into nbtree infrastructure in\n> > PostgreSQL. I've put up a wiki page [1] with my current research and\n> > thoughts on that topic.\n>\n> The idea of making _bt_truncate() produce new leaf page high keys\n> based on the lastleft tuple rather than the firstright tuple (i.e.\n> +inf truncated attribute values rather than the current -inf) seems\n> like a non-starter. As you point out in \"1.) Suffix-truncation; -INF\n> in high keys\" on the Postgres wiki page, the current approach\n> truncates firstright (not lastleft), making the left page's new high\n> key contain what you call a 'foreign' value. But I see that as a big\n> advantage of the current approach.\n>\n> Consider, for example, the nbtree high key \"continuescan\" optimization\n> added by commit 29b64d1d. The fact that leaf page high keys are\n> generated in this way kind of allows us to \"peak\" on the page to the\n> immediate right before actually visiting it -- possibly without ever\n> visiting it (which is where the benefit of that particular\n> optimization lies). _bt_check_unique() uses a similar trick. After the\n> Postgres 12 work, _bt_check_unique() will only visit a second page in\n> the extreme case where we cannot possibly fit all of the relevant\n> version duplicates on even one whole leaf page (i.e. practically\n> never). There is also cleverness inside _bt_compare() to make sure\n> that we handle the boundary cases perfectly while descending the tree.\n\nI understand and appreciate that the \"-INF\" truncation that is\ncurrently in place is being relied upon in quite some places. 
Part of\nthe effort for \"+INF\" truncation would be determining where and how to\nkeep the benefits of the \"-INF\" truncation. I also believe that for\ninternal pages truncating to \"+INF\" would be perfectly fine; the\noptimizations that I know of only rely on it at the leaf level.\nCompletely separate from that, there's no reason (except for a\npotential lack of unused bits) we can't flag suffix-truncated columns\nas either \"+INF\" or \"-INF\" - that would allow us to apply each where\nuseful.\n\n> You might also consider how the nbtsplitloc.c code works with\n> duplicates, and how that would be affected by +inf truncated\n> attributes. The leaf-page-packing performed in the SPLIT_SINGLE_VALUE\n> case only goes ahead when the existing high key confirms that this\n> must be the rightmost page. Now, I guess that you could still do\n> something like that if we switched to +inf semantics. But, the fact\n> that the new right page will have a 'foreign' value in the\n> SPLIT_SINGLE_VALUE-split case is also of benefit -- it is practically\n> empty right after the split (since the original/left page is packed\n> full), and we want this empty space to be eligible to either take more\n> duplicates, or to take values that may happen to fit between the\n> highly duplicated value and the original foreign high key value. We\n> want that flexibility, I think.\n>\n> I also find -inf much more natural. If in the future we teach nbtree\n> to truncate \"inside\" text attributes (say text columns), you'd pretty\n> much be doing the same thing at the level of characters rather than\n> whole columns. The -inf semantics are like strcmp() semantics.\n\nYes, I also read and appreciate your comments on +inf vs -inf when\nthis came up in [0]. 
However, if we could choose, I think that having\nboth options could be quite beneficial, especially when dealing with\nmany duplicates or duplicate prefixes.\n\n> If you're going to pursue full prefix compression anyway, maybe you\n> should use a low key on the leaf level in cases where the optimization\n> is in use. This creates complexity during page deletion, because the\n> low key in the subtree to the right of the deletion target subtree may\n> need to be updated. Perhaps you can find a way to make that work that\n> isn't too complicated.\n\nThat would be an interesting research path as well, the cost/benefit\nanalysis would be much trickier when comparing to the status quo.\n\n> You should probably account for index size here. I have lots of my own\n> tests for space utilization, using data from a variety of sources.\n\nI'd like to mention that the current (and measured) patchset only does\n_logical_ dynamic prefix truncation, not the physical prefix\ntruncation that is described on the wiki page. Physical prefix\ntruncation will probably be a summer / fall project, and I will indeed\nat some point need to build a test suite that would measure the\nbenefits, but for this patch I do not see the need for benchmarks on\nsize, as that is not the point of these patches. 
These patches are\nuseful on their own for multi-key-column btree performance (and some\nGIST), regardless of later patches implementing physical dynamic\nprefix truncation in the btree AM.\n\n\nWith regards,\n\nMatthias van de Meent\n\n[0] https://www.postgresql.org/message-id/CAH2-Wzm_Kxm26E_DwK7AR%2BZB_-B50OMpGoO%3Dn08tD%2BqH%3DMD-zw%40mail.gmail.com\n[1] https://www.postgresql.org/message-id/CAH2-Wzn_NAyK4pR0HRWO0StwHmxjP5qyu+X8vppt030XpqrO6w@mail.gmail.com\n\n\n", "msg_date": "Fri, 16 Apr 2021 23:20:36 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Iterating on IndexTuple attributes and nbtree page-level dynamic\n prefix truncation" }, { "msg_contents": "On Fri, Apr 16, 2021 at 2:20 PM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> > Interesting approach. I think that in an ideal world we would have a\n> > tuple format with attribute lengths/offsets right in the header.\n>\n> I believe that that would indeed be ideal w.r.t. access speed, but\n> also quite expensive w.r.t. amount of data stored. This would add 2\n> bytes per attribute in the current infrastructure (11 bits at least\n> for each attribute to store offsets), on the current 12 bytes of\n> overhead per indextuple (= 8 for IndexTuple + 4 for ItemId). That is\n> probably always going to be a non-starter, seeing that we can\n> relatively easily optimize our current attribute access patterns.\n\nI don't think that that's why it's a non-starter. This design assumes\na world in which everything has already been optimized for this\nlayout. You no longer get to store the varlena header inline, which\nwould break a lot of things in Postgres were it ever to be attempted.\nThe space efficiency issues don't really apply because you have an\noffset for fixed-length types -- their presence is always implied. I\nthink that you need to encode NULLs differently, which is a lot less\nspace efficient when there are a lot of NULLs. 
But on the whole this\ndesign seems more efficient than what we have currently.\n\nThis representation of index tuples would be a totally reasonable\ndesign were we in a green field situation. (Which is pretty far from\nthe situation we're actually in, of course.)\n\n> I understand and appreciate that the \"-INF\" truncation that is\n> currently in place is being relied upon in quite some places. Part of\n> the effort for \"+INF\" truncation would be determining where and how to\n> keep the benefits of the \"-INF\" truncation. I also believe that for\n> internal pages truncating to \"+INF\" would be perfectly fine; the\n> optimizations that I know of only rely on it at the leaf level.\n\nI don't doubt that there is nothing special about -inf from a key\nspace point of view. Actually...you could say that -inf is special to\nthe limited extent that we know it only appears in pivot tuples and\nexploit that property when the !pivotsearch case/optimization is used.\nBut that isn't much of an exception at a high level, so whatever.\n\nAnyway, it follows that +inf could in principle be used instead in\nsome or all cases -- all that is truly essential for correctness is\nthat the invariants always be respected. We're still in agreement up\nuntil here.\n\n> Yes, I also read and appreciate your comments on +inf vs -inf when\n> this came up in [0].\n\nI'm impressed that you've done your homework on this.\n\n> However, if we could choose, I think that having\n> both options could be quite beneficial, especially when dealing with\n> many duplicates or duplicate prefixes.\n\nThis is where things are much less clear -- maybe we're not in\nagreement here. Who knows, though -- maybe you're right. But you\nhaven't presented any kind of argument. I understand that it's hard to\narticulate what effects might be in play with stuff like this, so I\nwon't force the issue now. 
Strong evidence is of course the only way\nthat you'll reliably convince me of this.\n\nI should point out that I am a little confused about how this +inf\nbusiness could be both independently useful and pivotal to\nimplementing [dynamic] prefix truncation/compression. Seems...weird to\ndiscuss them together, except maybe to mention in passing that this\n+inf thing is notable for particularly helping dynamic prefix stuff --\nwhich is it?\n\nIt is my strong preference that nbtsplitloc.c continue to know\napproximately nothing about compression or deduplication. While it is\ntrue that nbtsplitloc.c's _bt_recsplitloc() is aware of posting lists,\nthis is strictly an optimization that is only justified by the fact\nthat posting lists are sometimes very large, and therefore worth\nconsidering directly -- just to get a more accurate idea of how a\nrelevant split point choice affects the balance of free space (we\ndon't bother to do the same thing with non-key INCLUDE columns because\nthey're generally small and equi-sized). And so this _bt_recsplitloc()\nthing is no exception to the general rule, which is:\ndeduplication/posting list maintenance should be *totally* orthogonal\nto the page split choice logic (the design of posting list splits\nhelps a lot with that). We can afford to have complicated split point\nchoice logic because the question of which split point is optimal is\ntotally decoupled from the question of which are correct -- in\nparticular, from the correctness of the space accounting used to\ngenerate candidate split points.\n\nIt may interest you to know that I once thought that it would be nice\nto have the *option* of +inf too, so that we could use it in very rare\ncases like the pathological SPLIT_MANY_DUPLICATES case that\n_bt_bestsplitloc() has some defenses against. It would perhaps be nice\nif we could use +inf selectively in that case. 
I never said anything\nabout this publicly before now, mostly because it wasn't that\nimportant -- pathological behaviors like this have never been reported\non by users a full 18 months after the release of 12.0, so it's\nunlikely to be a real concern.\n\n> > If you're going to pursue full prefix compression anyway, maybe you\n> > should use a low key on the leaf level in cases where the optimization\n> > is in use. This creates complexity during page deletion, because the\n> > low key in the subtree to the right of the deletion target subtree may\n> > need to be updated. Perhaps you can find a way to make that work that\n> > isn't too complicated.\n>\n> That would be an interesting research path as well, the cost/benefit\n> analysis would be much trickier when comparing to the status quo.\n\nI'd say that's unclear right now.\n\n> > You should probably account for index size here. I have lots of my own\n> > tests for space utilization, using data from a variety of sources.\n>\n> I'd like to mention that the current (and measured) patchset only does\n> _logical_ dynamic prefix truncation, not the physical prefix\n> truncation that is described on the wiki page.\n\nIf you change how _bt_truncate() behaves in any way (e.g. sometimes\nit's lastleft/+inf based now), and nothing else, you're still bound to\nchange the space utilization with the tests that I maintain -- though\nperhaps only at the level of noise. I sometimes call these tests \"wind\ntunnel tests\". 
It turns out that you can simulate rather a lot about a\nreal complicated workload with simple, deterministic, serial test\ncases -- provided you're only interested in the space utilization.\nThis helped a lot for both the Postgres 12 and Postgres 13 stuff\n(though not the Postgres 14 stuff).\n\n> These patches are\n> useful on their own for multi-key-column btree performance (and some\n> GIST), regardless of later patches implementing physical dynamic\n> prefix truncation in the btree AM.\n\nHave you isolated the performance impact of the first patch at all?\nCan you characterize how well it works on its own, perhaps just\ninformally? It would be convenient if the first patch could be treated\nas an independent thing.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 16 Apr 2021 16:05:06 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Iterating on IndexTuple attributes and nbtree page-level dynamic\n prefix truncation" }, { "msg_contents": "On Sat, 17 Apr 2021 at 01:05, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Fri, Apr 16, 2021 at 2:20 PM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > > Interesting approach. I think that in an ideal world we would have a\n> > > tuple format with attribute lengths/offsets right in the header.\n> >\n> > I believe that that would indeed be ideal w.r.t. access speed, but\n> > also quite expensive w.r.t. amount of data stored. This would add 2\n> > bytes per attribute in the current infrastructure (11 bits at least\n> > for each attribute to store offsets), on the current 12 bytes of\n> > overhead per indextuple (= 8 for IndexTuple + 4 for ItemId). That is\n> > probably always going to be a non-starter, seeing that we can\n> > relatively easily optimize our current attribute access patterns.\n>\n> I don't think that that's why it's a non-starter. This design assumes\n> a world in which everything has already been optimized for this\n> layout. 
You no longer get to store the varlena header inline, which\n> would break a lot of things in Postgres were it ever to be attempted.\n> The space efficiency issues don't really apply because you have an\n> offset for fixed-length types -- their presence is always implied. I\n> think that you need to encode NULLs differently, which is a lot less\n> space efficient when there are a lot of NULLs. But on the whole this\n> design seems more efficient than what we have currently.\n\nI believe that that depends on your definition of 'efficiency'. For\nstorage efficiency, the current design is quite good (except for the\nvarlena header size of 4 bytes for attributes > 127 bytes, which could\nbe 2 bytes because pages can not be larger than 64kiB (actually 32kiB)\nwith our current design, all attributes use just about the least data\npossible). For access efficiency / code complexity, you're probably\nright that storing attribute offsets in the tuple header is\npreferable, but such design would still need some alignment calls, or\nstore the length of attributes as well to prevent reading the\nalignment padding of the next attribute into the variable length\nattribute at the additional overhead of up to 2 bytes per attribute.\n\n> This representation of index tuples would be a totally reasonable\n> design were we in a green field situation. (Which is pretty far from\n> the situation we're actually in, of course.)\n\nThat might indeed be the case, assuming a green field with different\nor no processing architecture or storage limitations. CPU to storage\nbandwidth can be (and often is) a bottleneck, as well as alignment.\n\n> > I understand and appreciate that the \"-INF\" truncation that is\n> > currently in place is being relied upon in quite some places. Part of\n> > the effort for \"+INF\" truncation would be determining where and how to\n> > keep the benefits of the \"-INF\" truncation. 
I also believe that for\n> > internal pages truncating to \"+INF\" would be perfectly fine; the\n> > optimizations that I know of only rely on it at the leaf level.\n>\n> I don't doubt that there is nothing special about -inf from a key\n> space point of view. Actually...you could say that -inf is special to\n> the limited extent that we know it only appears in pivot tuples and\n> exploit that property when the !pivotsearch case/optimization is used.\n> But that isn't much of an exception at a high level, so whatever.\n>\n> Anyway, it follows that +inf could in principle be used instead in\n> some or all cases -- all that is truly essential for correctness is\n> that the invariants always be respected. We're still in agreement up\n> until here.\n\nAgreed\n\n> > Yes, I also read and appreciate your comments on +inf vs -inf when\n> > this came up in [0].\n>\n> I'm impressed that you've done your homework on this.\n>\n> > However, if we could choose, I think that having\n> > both options could be quite beneficial, especially when dealing with\n> > many duplicates or duplicate prefixes.\n>\n> This is where things are much less clear -- maybe we're not in\n> agreement here. Who knows, though -- maybe you're right. But you\n> haven't presented any kind of argument. I understand that it's hard to\n> articulate what effects might be in play with stuff like this, so I\n> won't force the issue now. Strong evidence is of course the only way\n> that you'll reliably convince me of this.\n>\n> I should point out that I am a little confused about how this +inf\n> business could be both independently useful and pivotal to\n> implementing [dynamic] prefix truncation/compression. 
Seems...weird to\n> discuss them together, except maybe to mention in passing that this\n> +inf thing is notable for particularly helping dynamic prefix stuff --\n> which is it?\n\nI agree that my reasoning might have been unclear and confusing.\n\nI mean that most benefits that we could receive from +inf would be in\nimproving the ability to apply [dynamic] prefix truncation on a page\nby limiting the keyspace of that page to 'local' values. If prefix\ntruncation is impossible / does not apply for some index (a single\nunique column !allequalimage index is a likely worst case scenario),\nthen applying +inf would potentially be detrimental to the performance\nof certain other optimizations (e.g. the continuescan optimization),\nin which case using -inf would probably be preferable. Ergo, I'm\nplanning on making _bt_recsplitloc aware of +inf and -inf after\nimplementing physical prefix truncation, and allow it to decide if and\nwhen each should be applied, if it turns out it consistently improves\nspace and/or time performance without significantly decreasing either.\n\n> It is my strong preference that nbtsplitloc.c continue to know\n> approximately nothing about compression or deduplication. While it is\n> true that nbtsplitloc.c's _bt_recsplitloc() is aware of posting lists,\n> this is strictly an optimization that is only justified by the fact\n> that posting lists are sometimes very large, and therefore worth\n> considering directly -- just to get a more accurate idea of how a\n> relevant split point choice affects the balance of free space (we\n> don't bother to do the same thing with non-key INCLUDE columns because\n> they're generally small and equi-sized). And so this _bt_recsplitloc()\n> thing no exception to the general rule, which is:\n> deduplication/posting list maintenance should be *totally* orthogonal\n> to the page split choice logic (the design of posting list splits\n> helps a lot with that). 
We can afford to have complicated split point\n> choice logic because the question of which split point is optimal is\n> totally decoupled from the question of which are correct -- in\n> particular, from the correctness of the space accounting used to\n> generate candidate split points.\n\nI would argue that it also knows about duplicate attributes and shared\nprefixes? It already optimizes (unintentionally?) for deduplication by\nchoosing split points between two runs of equal values. I believe that\nimplementing the same for prefixes (if not already in place) would not\nstand out too much. I think we can discuss that more extensively when\nwe actually have code that would benefit from that.\n\n> It may interest you to know that I once thought that it would be nice\n> to have the *option* of +inf too, so that we could use it in very rare\n> cases like the pathological SPLIT_MANY_DUPLICATES case that\n> _bt_bestsplitloc() has some defenses against. It would perhaps be nice\n> if we could use +inf selectively in that case. I never said anything\n> about this publicly before now, mostly because it wasn't that\n> important -- pathological behaviors like this have never been reported\n> on by users a full 18 months after the release of 12.0, so it's\n> unlikely to be a real concern.\n\nI do not per se disagree, but I should note that the amazing work on\nbtree page split prevention through 'heapkeyspace', deduplication and\neager tuple deletion have changed some key behaviours of btree index\npages. The same would likely occur once physical prefix truncation is\nimplemented, and in that case I believe that some decisions that were\npreviously non-problematic might need to be re-examined.\n\n> > > If you're going to pursue full prefix compression anyway, maybe you\n> > > should use a low key on the leaf level in cases where the optimization\n> > > is in use. 
This creates complexity during page deletion, because the\n> > > low key in the subtree to the right of the deletion target subtree may\n> > > need to be updated. Perhaps you can find a way to make that work that\n> > > isn't too complicated.\n> >\n> > That would be an interesting research path as well, the cost/benefit\n> > analysis would be much trickier when comparing to the status quo.\n>\n> I'd say that's unclear right now.\n\nI agree. My 'trickier' referred to the fact that \"adding an extra non-key tuple\nto the page\" needs solid understanding and reasoning about the use of\nthe AM to prove that it's worth the extra metadata on the page.\nProving that is, in my opinion, difficult.\n\n> > > You should probably account for index size here. I have lots of my own\n> > > tests for space utilization, using data from a variety of sources.\n> >\n> > I'd like to mention that the current (and measured) patchset only does\n> > _logical_ dynamic prefix truncation, not the physical prefix\n> > truncation that is described on the wiki page.\n>\n> If you change how _bt_truncate() behaves in any way (e.g. sometimes\n> it's lastleft/+inf based now), and nothing else, you're still bound to\n> change the space utilization with the tests that I maintain -- though\n> perhaps only at the level of noise. I sometimes call these tests \"wind\n> tunnel tests\". It turns out that you can simulate rather a lot about a\n> real complicated workload with simple, deterministic, serial test\n> cases -- provided you're only interested in the space utilization.\n> This helped a lot for both the Postgres 12 and Postgres 13 stuff\n> (though not the Postgres 14 stuff).\n\nI would be interested in running these benchmarks when I get to\nupdating the physical format. 
Good to know there are relatively easy\ntests available.\n\n> > These patches are\n> > useful on their own for multi-key-column btree performance (and some\n> > GIST), regardless of later patches implementing physical dynamic\n> > prefix truncation in the btree AM.\n>\n> Have you isolated the performance impact of the first patch at all?\n> Can you characterize how well it works on its own, perhaps just\n> informally?\n\nThe REINDEX performance results are the place where attribute iteration\nshines best, as the hot path in reindex is the tuple comparison, which\nused index_getattr a lot, and the dynamic prefix\ntruncation is not applicable there (yet?). Its time spent went down by\nover 6% for the indexes with 3 key columns of variable length, whereas\nthe indexes with only a single fixed-size attribute took only slightly\nlonger (+0.6% avg in 3 runs on a laptop, high variance). I have not\ntested it with GIST, but I believe that similar results are realistic\nthere as well for varlen attributes.\n\n> It would be convenient if the first patch could be treated\n> as an independent thing.\n\nPatch 0002 was the reason for writing 0001, and uses the performance\nimprovements of 0001 to show its worth. As such, I submitted them as\na set. If you'd like, I could submit 0002 separately?\n\nWith regards,\n\nMatthias van de Meent\n\n[+] instead of starting _binsrch with only the high key compare\nresult, we could also eagerly compare the search key to the lowest\nkey. This way, we have high+low bounds for the whole page, instead of\nhaving that only after finding a key < searchkey on the page. 
The\neffort might just as well not be worth it, as it is one extra key\ncompare (out of max 9 on a page, plus one highkey).\n\n\n", "msg_date": "Fri, 23 Apr 2021 12:45:45 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Iterating on IndexTuple attributes and nbtree page-level dynamic\n prefix truncation" }, { "msg_contents": "On Fri, 23 Apr 2021 at 12:45, Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> On Sat, 17 Apr 2021 at 01:05, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> > It would be convenient if the first patch could be treated\n> > as an independent thing.\n>\n> Patch 0002 was the reason for writing 0001, and uses the performance\n> improvements of 0001 to show it's worth. As such, I submitted them as\n> a set. If you'd like, I could submit 0002 seperately?\n\nFor now, version 2 of the patchset to make MSVC and cfbot happy (only\nfixes the compilation issues, no significant changes). I'll try to\nbenchmark the patches in this patchset (both 0001, and 0001+0002) in\nthe upcoming weekend.\n\nKind regards,\n\nMatthias van de Meent.", "msg_date": "Thu, 17 Jun 2021 17:14:11 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Iterating on IndexTuple attributes and nbtree page-level dynamic\n prefix truncation" }, { "msg_contents": "On Thu, 17 Jun 2021 at 17:14, Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> I'll try to\n> benchmark the patches in this patchset (both 0001, and 0001+0002) in\n> the upcoming weekend.\n\nSomewhat delayed, benchmark results are attached. These are based on 7\niterations of the attached benchmark script ('scratch.sql'), with the\nlatest 'UK Price Paid' dataset. 
Again, the index_test table is an\nunlogged copy of the land_registry_price_paid_uk table, with one\nadditional trailing always_null column.\n\nResults for 0001 are quite good in the target area of multi-column\nindexes in which attcacheoff cannot be used (2-4% for insertion\nworkloads, 4-12% for reindex workloads), but regress slightly for\nthe single unique column insertion test, and are quite a bit worse for\nboth insert and reindex cases for the attcacheoff-enabled multi-column\nindex (4% and 18% respectively (!)).\n\nWith 0001+0002, further improvements are made in the target area (now\n4-7% for the various insertion workloads, 5-14% for reindex). The\nregression in the insert- and reindex-workload in attcacheoff-enabled\nmulti-column indexes is still substantial, but slightly less bad (down\nto a 2% and 15% degradation respectively).\n\nEvidently, this needs improvements in the (likely common)\nattcacheoff-enabled multi-column case, as I don't think we can\nreasonably commit a 10+% regression. 
I'll work on that this weekend.\n\n\nKind regards,\n\nMatthias van de Meent\n\n\nBenchmarks were all performed on WSL2 running Debian 10, on an AMD\n5950X, with shared_buffers = 15GB (which should fit the dataset three\ntimes), enable_indexscan = off, autovacuum disabled, and parallel\nworkers disabled on the tables, so that the results should be about as\nstable as it gets.", "msg_date": "Thu, 24 Jun 2021 18:21:45 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Iterating on IndexTuple attributes and nbtree page-level dynamic\n prefix truncation" }, { "msg_contents": "> On 24 Jun 2021, at 18:21, Matthias van de Meent <boekewurm+postgres@gmail.com> wrote:\n> \n> On Thu, 17 Jun 2021 at 17:14, Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n>> \n>> I'll try to\n>> benchmark the patches in this patchset (both 0001, and 0001+0002) in\n>> the upcoming weekend.\n> \n> Somewhat delayed, benchmark results are attached.\n\nI'm moving this patch to the next CF to allow for more review of the latest\nrevision and benchmark results. It no longer applies though, so please post a\nrebased version.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 2 Dec 2021 11:21:59 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Iterating on IndexTuple attributes and nbtree page-level dynamic\n prefix truncation" } ]
[ { "msg_contents": "Hi,\n\nI recently noticed that ATTACH PARTITION also recursively locks the\ndefault partition with ACCESS EXCLUSIVE mode when its constraints do\nnot explicitly exclude the to-be-attached partition, which I couldn't\nfind documented (has been there since PG10 I believe).\n\nPFA a patch that documents just that.\n\nWith regards,\n\nMatthias van de Meent.", "msg_date": "Thu, 15 Apr 2021 20:47:26 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "ATTACH PARTITION locking documentation for DEFAULT partitions" }, { "msg_contents": "On Thu, Apr 15, 2021 at 08:47:26PM +0200, Matthias van de Meent wrote:\n> I recently noticed that ATTACH PARTITION also recursively locks the\n> default partition with ACCESS EXCLUSIVE mode when its constraints do\n> not explicitly exclude the to-be-attached partition, which I couldn't\n> find documented (has been there since PG10 I believe).\n\nI'm not sure it's what you're looking for, but maybe you saw:\nhttps://www.postgresql.org/docs/12/sql-altertable.html\n|The default partition can't contain any rows that would need to be moved to the\n|new partition, and will be scanned to verify that none are present. This scan,\n|like the scan of the new partition, can be avoided if an appropriate\n|<literal>CHECK</literal> constraint is present.\n\nAnd since 2a4d96ebb:\n|Attaching a partition acquires a SHARE UPDATE EXCLUSIVE lock on the parent table, in addition to ACCESS EXCLUSIVE locks on the table to be attached and on the default partition (if any).\n\n From your patch:\n\n> + <para>\n> + Similarly, if you have a default partition on the parent table, it is\n> + recommended to create a <literal>CHECK</literal> constraint that excludes\n> + the to be attached partition constraint. 
Here, too, without the\n> + <literal>CHECK</literal> constraint, this table will be scanned to\n> + validate that the updated default partition constraints while holding\n> + an <literal>ACCESS EXCLUSIVE</literal> lock on the default partition.\n> + </para>\n\nThe AEL is acquired in any case, right ?\n\nI think whatever we say here needs to be crystal clear that only the scan can\nbe skipped.\n\nI suggest that maybe the existing paragraph in alter_table.sgml could maybe say\nthat an exclusive lock is held, maybe like.\n\n|The default partition can't contain any rows that would need to be moved to the\n|new partition, and will be scanned to verify that none are present. This scan,\n|like the scan of the new partition, can be avoided if an appropriate\n|<literal>CHECK</literal> constraint is present.\n|The scan of the default partition occurs while it is exclusively locked.\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 15 Apr 2021 14:24:50 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: ATTACH PARTITION locking documentation for DEFAULT partitions" }, { "msg_contents": "On Thu, 15 Apr 2021 at 21:24, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Thu, Apr 15, 2021 at 08:47:26PM +0200, Matthias van de Meent wrote:\n> > I recently noticed that ATTACH PARTITION also recursively locks the\n> > default partition with ACCESS EXCLUSIVE mode when its constraints do\n> > not explicitly exclude the to-be-attached partition, which I couldn't\n> > find documented (has been there since PG10 I believe).\n>\n> I'm not sure it's what you're looking for, but maybe you saw:\n> https://www.postgresql.org/docs/12/sql-altertable.html\n> |The default partition can't contain any rows that would need to be moved to the\n> |new partition, and will be scanned to verify that none are present. 
This scan,\n> |like the scan of the new partition, can be avoided if an appropriate\n> |<literal>CHECK</literal> constraint is present.\n>\n> And since 2a4d96ebb:\n> |Attaching a partition acquires a SHARE UPDATE EXCLUSIVE lock on the parent table, in addition to ACCESS EXCLUSIVE locks on the table to be attached and on the default partition (if any).\n\nFrom the current documentation the recursive locking isn't clear: I\ndidn't expect an ACCESS EXCLUSIVE on the whole hierarchy of both the\nto-be-attached and the default partitions whilst scanning, because the\nSUEL on the shared parent is not propagated to all its children\neither.\n\n> From your patch:\n>\n> > + <para>\n> > + Similarly, if you have a default partition on the parent table, it is\n> > + recommended to create a <literal>CHECK</literal> constraint that excludes\n> > + the to be attached partition constraint. Here, too, without the\n> > + <literal>CHECK</literal> constraint, this table will be scanned to\n> > + validate that the updated default partition constraints while holding\n> > + an <literal>ACCESS EXCLUSIVE</literal> lock on the default partition.\n> > + </para>\n>\n> The AEL is acquired in any case, right ?\n\nYes, the main point is that the validation scan runs whilst holding\nthe AEL on the partition (sub)tree of that default partition. After\nlooking a bit more at the code, I agree that my current patch is not\ndescriptive enough.\n\nI compared adding a partition to running `ADD CONSTRAINT ... NOT\nVALID` on the to-be-altered partitions (using AEL), + `VALIDATE\nCONSTRAINT` running recursively over its partitions (using SHARE\nUPDATE EXCLUSIVE). We only expect an SUEL for VALIDATE CONSTRAINT, and\nthe constraint itself is only added/updated to the direct descendents\nof the parent, not their recursive descendents. 
Insertions already can\nonly happen when the whole upward hierarchy of a partition allows for\ninserts, so this shouldn't be that much of an issue.\n\n> I think whatever we say here needs to be crystal clear that only the scan can\n> be skipped.\n\nYes, but when we skip the scan for the default partition, we also skip\nlocking its partition tree with AEL. The partition tree of the table\nthat is being attached, however, is fully locked regardless of\nconstraint definitions.\n\n\n> I suggest that maybe the existing paragraph in alter_table.sgml could maybe say\n> that an exclusive lock is held, maybe like.\n>\n> |The default partition can't contain any rows that would need to be moved to the\n> |new partition, and will be scanned to verify that none are present. This scan,\n> |like the scan of the new partition, can be avoided if an appropriate\n> |<literal>CHECK</literal> constraint is present.\n> |The scan of the default partition occurs while it is exclusively locked.\n\nPFA an updated patch. I've updated the wording of the previous patch,\nand also updated this section in alter_table.sgml, but with different\nwording, explicitly explaining the process used to validate the altered\ndefault constraint.\n\n\nThanks for the review.\n\nWith regards,\n\nMatthias van de Meent", "msg_date": "Fri, 16 Apr 2021 14:02:56 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ATTACH PARTITION locking documentation for DEFAULT partitions" }, { "msg_contents": "On Sat, 17 Apr 2021 at 00:03, Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> PFA an updated patch. 
I've updated the wording of the previous patch,\n> and also updated this section in alter_table.sgml, but with different\n> wording, explictly explaining the process used to validate the altered\n> default constraint.\n\nI had to squint at this:\n\n+ALTER TABLE measurement_default ADD CONSTRAINT excl_y2008m02\n+ CHECK ( (logdate &gt;= DATE '2008-02-01' AND logdate &lt; DATE\n'2008-03-01') IS FALSE );\n\nI tried your example and it does not work.\n\nset client_min_messages = 'debug1';\ncreate table rp (dt date not null) partition by range(dt);\ncreate table rp_default partition of rp default;\nalter table rp_default add constraint rp_default_chk check ((dt >=\n'2022-01-01' and dt < '2023-01-01') is false);\ncreate table rp_2022 partition of rp for values from ('2022-01-01') to\n('2023-01-01');\n\nThere's no debug message to indicate that the constraint was used.\n\nLet's try again:\n\nalter table rp_default drop constraint rp_default_chk;\ndrop table rp_2022;\nalter table rp_default add constraint rp_default_chk check (not (dt >=\n'2022-01-01' and dt < '2023-01-01'));\ncreate table rp_2022 partition of rp for values from ('2022-01-01') to\n('2023-01-01');\nDEBUG: updated partition constraint for default partition\n\"rp_default\" is implied by existing constraints\n\nThe debug message indicates that it worked as expected that time.\n\nBut to be honest, I don't know why you've even added that. 
There's not\neven an example on how to add a DEFAULT partition, so why should we\ninclude an example of how to add a CHECK constraint on one?\n\nI've spent a bit of time hacking at this and I've come up with the\nattached patch.\n\nDavid", "msg_date": "Mon, 5 Jul 2021 01:01:04 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ATTACH PARTITION locking documentation for DEFAULT partitions" }, { "msg_contents": "On Mon, 5 Jul 2021 at 01:01, David Rowley <dgrowleyml@gmail.com> wrote:\n> I've spent a bit of time hacking at this and I've come up with the\n> attached patch.\n\nMatthias, any thoughts on my revised version of the patch?\n\nDavid\n\n\n", "msg_date": "Tue, 13 Jul 2021 00:06:32 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ATTACH PARTITION locking documentation for DEFAULT partitions" }, { "msg_contents": "On Mon, 12 Jul 2021 at 14:06, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Mon, 5 Jul 2021 at 01:01, David Rowley <dgrowleyml@gmail.com> wrote:\n> > I've spent a bit of time hacking at this and I've come up with the\n> > attached patch.\n>\n> Matthias, any thoughts on my revised version of the patch?\n\nSorry for the delay. I think that covers the basics of what I was\nmissing in these docs, and although it does not cover the recursive\n'if the check is implied by constraints don't lock this partition',\nI'd say that your suggested patch is good enough. Thanks for looking\nover this.\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Mon, 12 Jul 2021 14:13:50 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ATTACH PARTITION locking documentation for DEFAULT partitions" }, { "msg_contents": "On Tue, 13 Jul 2021 at 00:14, Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> Sorry for the delay. 
I think that covers the basics of what I was\n> missing in these docs, and although it does not cover the recursive\n> 'if the check is implied by constraints don't lock this partition',\n> I'd say that your suggested patch is good enough. Thanks for looking\n> over this.\n\nIsn't that covered the following?\n\n+ <para>\n+ Further locks must also be held on all sub-partitions if the table being\n+ attached is itself a partitioned table. Likewise if the default\n+ partition is itself a partitioned table. The locking of the\n+ sub-partitions can be avoided by adding a <literal>CHECK</literal>\n+ constraint as described in\n+ <xref linkend=\"ddl-partitioning-declarative-maintenance\"/>.\n </para>\n\nDavid\n\n\n", "msg_date": "Tue, 13 Jul 2021 01:27:54 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ATTACH PARTITION locking documentation for DEFAULT partitions" }, { "msg_contents": "On Mon, 12 Jul 2021 at 15:28, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Tue, 13 Jul 2021 at 00:14, Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > Sorry for the delay. I think that covers the basics of what I was\n> > missing in these docs, and although it does not cover the recursive\n> > 'if the check is implied by constraints don't lock this partition',\n> > I'd say that your suggested patch is good enough. Thanks for looking\n> > over this.\n>\n> Isn't that covered the following?\n>\n> + <para>\n> + Further locks must also be held on all sub-partitions if the table being\n> + attached is itself a partitioned table. Likewise if the default\n> + partition is itself a partitioned table. The locking of the\n> + sub-partitions can be avoided by adding a <literal>CHECK</literal>\n> + constraint as described in\n> + <xref linkend=\"ddl-partitioning-declarative-maintenance\"/>.\n> </para>\n\nThe exact behaviour is (c.q. 
QueuePartitionConstraintValidation in\ntablecmds.c:17072), for each partition of this table:\n\n1.) if the existing constraints imply the new constraints: return to .\n2.) lock this partition with ACCESS EXCLUSIVE\n3.) if this is a partitioned table, for each direct child partition,\nexecute this algorithm.\n\nThe algoritm as described in your patch implies that this recursive\nlocking is conditional on _only_ the check-constraints of the topmost\npartition (\"performed whilst holding ... and all of its\nsub-partitions, if any\"), whereas actually the locking on each\n(sub-)partition is determined by the constraints of the hierarchy down\nto that child partition. It in actuality, this should not matter much,\nbut this is a meaningful distinction that I wanted to call out.\n\nRegardless of the distinction between actual locking behaviour and\nthis documentation, we might not want to document this specific\nalgorithm, as this algorithm might be changed in future versions, and\nthe proposed documentation leaves a little wiggleroom in changing the\nlocking behaviour without needing to update the docs.\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Mon, 12 Jul 2021 16:30:41 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ATTACH PARTITION locking documentation for DEFAULT partitions" }, { "msg_contents": "On Tue, 13 Jul 2021 at 02:30, Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> The algoritm as described in your patch implies that this recursive\n> locking is conditional on _only_ the check-constraints of the topmost\n> partition (\"performed whilst holding ... and all of its\n> sub-partitions, if any\"), whereas actually the locking on each\n> (sub-)partition is determined by the constraints of the hierarchy down\n> to that child partition. 
It in actuality, this should not matter much,\n> but this is a meaningful distinction that I wanted to call out.\n\nI had in mind that was implied, but maybe it's better to be explicit about that.\n\nI've adjusted the patch and attached what I came up with. Let me know\nwhat you think.\n\nI think this can be back-patched as far as 12. Before then we took an\nAEL on the partitioned table, so it seems much less important since\nany concurrency would be blown out by the AEL.\n\nDavid", "msg_date": "Tue, 27 Jul 2021 18:02:10 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ATTACH PARTITION locking documentation for DEFAULT partitions" }, { "msg_contents": "On Tue, 27 Jul 2021 at 08:02, David Rowley <dgrowleyml@gmail.com> wrote:\\>\n> On Tue, 13 Jul 2021 at 02:30, Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > The algoritm as described in your patch implies that this recursive\n> > locking is conditional on _only_ the check-constraints of the topmost\n> > partition (\"performed whilst holding ... and all of its\n> > sub-partitions, if any\"), whereas actually the locking on each\n> > (sub-)partition is determined by the constraints of the hierarchy down\n> > to that child partition. It in actuality, this should not matter much,\n> > but this is a meaningful distinction that I wanted to call out.\n>\n> I had in mind that was implied, but maybe it's better to be explicit about that.\n>\n> I've adjusted the patch and attached what I came up with. Let me know\n> what you think.\n\nI like this improved wording. 
Thanks!\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Tue, 27 Jul 2021 11:35:53 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ATTACH PARTITION locking documentation for DEFAULT partitions" }, { "msg_contents": "On Tue, 27 Jul 2021 at 21:36, Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> On Tue, 27 Jul 2021 at 08:02, David Rowley <dgrowleyml@gmail.com> wrote:\\>\n> > I've adjusted the patch and attached what I came up with. Let me know\n> > what you think.\n>\n> I like this improved wording. Thanks!\n\nI've pushed this with some very minor further wording adjustments.\n\nThanks for working on this.\n\nDavid\n\n\n", "msg_date": "Wed, 28 Jul 2021 15:04:31 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ATTACH PARTITION locking documentation for DEFAULT partitions" } ]
[ { "msg_contents": "[ moving this to a new thread so as not to confuse the cfbot ]\n\nI wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> Is there anything else we should be doing along the eat your own dogfood\n>> line that don't have these security implications?\n\n> We can still convert the initdb-created SQL functions to new style,\n> since there's no security threat during initdb. I'll make a patch\n> for that soon.\n\nHere's a draft patch that converts all the built-in and information_schema\nSQL functions to new style, except for half a dozen that cannot be\nconverted because they use polymorphic arguments.\n\nLeaving that remaining half-a-dozen as old style seems okay from a\nsecurity standpoint, because they are few enough and simple enough\nthat it's no big notational headache to make their source text 100%\nsearch-path-proof. I've inserted OPERATOR() notation where necessary\nto make them bulletproof.\n\nAlso worth a comment perhaps is that for the functions that are being\nconverted, I replaced the prosrc text in pg_proc.dat with \"see\nsystem_views.sql\". I think this might reduce confusion by making\nit clear that these are not the operative definitions.\n\nOne thing this patch does that's not strictly within the charter\nis to give the two forms of ts_debug() pg_proc.dat entries, just\nso they are more like their new neighbors. This means they'll be\npinned where before they were not, but that seems desirable to me.\n\nI'm pretty confident the conversion is accurate, because I used \\sf\nto generate the text for the replacement definitions. 
So I think\nthis is committable, though review is welcome.\n\nOne thing I was wondering about, but did not pull the trigger on\nhere, is whether to split off the function-related stuff in\nsystem_views.sql into a new file \"system_functions.sql\", as has\nlong been speculated about by the comments in system_views.sql.\nI think it is time to do this because\n\n(a) The function stuff now amounts to a full third of the file.\n\n(b) While the views made by system_views.sql are intentionally\nnot pinned, the function-related commands are messing with\npre-existing objects that *are* pinned. This seems quite\nconfusing to me, and it might interfere with the intention that\nyou could reload the system view definitions using this file.\n\nThoughts?\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 15 Apr 2021 19:25:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Converting built-in SQL functions to new style" }, { "msg_contents": "On Thu, Apr 15, 2021 at 07:25:39PM -0400, Tom Lane wrote:\n> One thing I was wondering about, but did not pull the trigger on\n> here, is whether to split off the function-related stuff in\n> system_views.sql into a new file \"system_functions.sql\"\n\n+1\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 15 Apr 2021 21:07:38 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Converting built-in SQL functions to new style" }, { "msg_contents": "On Thu, Apr 15, 2021 at 07:25:39PM -0400, Tom Lane wrote:\n> Here's a draft patch that converts all the built-in and information_schema\n> SQL functions to new style, except for half a dozen that cannot be\n> converted because they use polymorphic arguments.\n\nThis patch looks good.\n\n> One thing I was wondering about, but did not pull the trigger on\n> here, is whether to split off the function-related stuff in\n> system_views.sql into a new file \"system_functions.sql\", as has\n> long been speculated about by the 
comments in system_views.sql.\n> I think it is time to do this because\n> \n> (a) The function stuff now amounts to a full third of the file.\n\nFair.\n\n> (b) While the views made by system_views.sql are intentionally\n> not pinned, the function-related commands are messing with\n> pre-existing objects that *are* pinned. This seems quite\n> confusing to me, and it might interfere with the intention that\n> you could reload the system view definitions using this file.\n\nI'm not aware of that causing a problem. Currently, the views give a few\nerrors, and the functions do not.\n\n\n", "msg_date": "Fri, 16 Apr 2021 01:30:58 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Converting built-in SQL functions to new style" } ]
[ { "msg_contents": "Dear all\n\nSince I was receiving an error when defining a set returning function, I\nborrowed a function from PostgreSQL as follows\n\n/* C definition */\ntypedef struct testState\n{\n int current;\n int finish;\n int step;\n} testState;\n\n/**\n* test_srf(startval int, endval int, step int)\n*/\nPG_FUNCTION_INFO_V1(test_srf);\nDatum test_srf(PG_FUNCTION_ARGS)\n{\n FuncCallContext *funcctx;\n testState *fctx;\n int result; /* the actual return value */\n\n if (SRF_IS_FIRSTCALL())\n {\n /* Get input values */\n int start = PG_GETARG_INT32(0);\n int finish = PG_GETARG_INT32(1);\n int step = PG_GETARG_INT32(2);\n MemoryContext oldcontext;\n\n /* create a function context for cross-call persistence */\n funcctx = SRF_FIRSTCALL_INIT();\n\n /* switch to memory context appropriate for multiple function calls */\n oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);\n\n /* quick opt-out if we get nonsensical inputs */\n if (step <= 0 || start == finish)\n {\n funcctx = SRF_PERCALL_SETUP();\n SRF_RETURN_DONE(funcctx);\n }\n\n /* allocate memory for function context */\n fctx = (testState *) palloc0(sizeof(testState));\n fctx->current = start;\n fctx->finish = finish;\n fctx->step = step;\n\n funcctx->user_fctx = fctx;\n MemoryContextSwitchTo(oldcontext);\n }\n\n /* stuff done on every call of the function */\n funcctx = SRF_PERCALL_SETUP();\n\n /* get state */\n fctx = funcctx->user_fctx;\n\n result = fctx->current;\n fctx->current += fctx->step;\n /* Stop when we have generated all values */\n if (fctx->current > fctx->finish)\n {\n SRF_RETURN_DONE(funcctx);\n }\n\n SRF_RETURN_NEXT(funcctx, Int32GetDatum(result));\n}\n\n/* SQL definition */\nCREATE OR REPLACE FUNCTION testSRF(startval int, endval int, step int)\n RETURNS SETOF integer\n AS 'MODULE_PATHNAME', 'test_srf'\n LANGUAGE C IMMUTABLE STRICT PARALLEL SAFE;\n\nWhen I execute this function I obtain\n\nselect testSRF(1,10, 2);\nERROR: unrecognized table-function returnMode: 
257\n\nselect version();\n PostgreSQL 13.2 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu\n9.3.0-17ubuntu1~20.04) 9.3.0, 64-bit\n\nAny idea what could be wrong ?\n\nThanks for your help\n\nEsteban", "msg_date": "Fri, 16 Apr 2021 15:34:32 +0200", "msg_from": "Esteban Zimanyi <ezimanyi@ulb.ac.be>", "msg_from_op": true, "msg_subject": "Error when defining a set returning function" }, { "msg_contents": "Esteban Zimanyi <ezimanyi@ulb.ac.be> writes:\n> Since I was receiving an error when defining a set returning function, I\n> borrowed a function from PostgreSQL as follows\n> ...\n> When I execute this function I obtain\n\n> select testSRF(1,10, 2);\n> ERROR: unrecognized table-function returnMode: 257\n\nHmm, I compiled this function up and it works for me:\n\nregression=# select testSRF(1,10, 2);\n testsrf \n----------\n 1\n 3\n 5\n 7\n(4 rows)\n\nI think your "quick opt-out" code is a bit broken, because it fails to\nrestore the current memory context; but there's nothing wrong with the\nmain code path.\n\nHence, the problem is somewhere else. The first theory that comes\nto mind is that you're compiling against Postgres headers that\ndon't match the server version you're actually loading the code\ninto. In theory the PG_MODULE_MAGIC infrastructure ought to catch\nthat, but maybe you've found some creative way to fool that :-(.\nOne way maybe would be if the headers were from some pre-release\nv13 version that wasn't ABI-compatible with 13.0.\n\nOr it could be something else, but I'd counsel looking for build\nprocess mistakes, cause this C code isn't the problem.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 16 Apr 2021 10:29:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error when defining a set returning function" }, { "msg_contents": "Dear Tom\n\nMany thanks for asking my question so quickly. 
After your answer, I\ndownloaded brand new versions of PostgreSQL 13.2, PostGIS 2.5.5, and\ncompiled/installed with the standard parameters. I didn't get any error\nmessages in the build. I then recompiled again MobilityDB and got the same\nerror message.\n\nWhen debugging the function with gdb, I noticed that the rsinfo variable of\nthe PostgreSQL function ExecMakeFunctionResultSet is modified in the\nmacro SRF_RETURN_NEXT causing the problem. Any idea how to solve this?\n\n4353 SRF_RETURN_NEXT(funcctx, Int32GetDatum(result));\n(gdb) up\n#1 0x000055b8a871fc56 in ExecMakeFunctionResultSet (fcache=0x55b8a8e6d9a0,\necontext=0x55b8a8e6cfa0,\n argContext=0x55b8a9d00dd0, isNull=0x55b8a8e6d930, isDone=0x55b8a8e6d988)\n at\n/home/esteban/src/postgresql-13.2/build_dir/../src/backend/executor/execSRF.c:614\n614 result = FunctionCallInvoke(fcinfo);\n(gdb) p rsinfo\n$5 = {type = T_ReturnSetInfo, econtext = 0x55b8a8e6cfa0, expectedDesc =\n0x55b8a8e6e8f0, allowedModes = 3,\n returnMode = SFRM_ValuePerCall, isDone = ExprSingleResult, setResult =\n0x0, setDesc = 0x0}\n(gdb) n\n4354 }\n(gdb)\nExecMakeFunctionResultSet (fcache=0x55b8a8e6d9a0, econtext=0x55b8a8e6cfa0,\nargContext=0x55b8a9d00dd0,\n isNull=0x55b8a8e6d930, isDone=0x55b8a8e6d988)\n at\n/home/esteban/src/postgresql-13.2/build_dir/../src/backend/executor/execSRF.c:615\n615 *isNull = fcinfo->isnull;\n(gdb) p rsinfo\n$6 = {type = T_ReturnSetInfo, econtext = 0x55b8a8e6cfa0, expectedDesc =\n0x55b8a8e6e8f0, allowedModes = 3,\n returnMode = (SFRM_ValuePerCall | unknown: 256), isDone =\nExprSingleResult, setResult = 0x0, setDesc = 0x0}\n(gdb)", "msg_date": "Fri, 16 Apr 2021 18:33:46 +0200", "msg_from": "Esteban Zimanyi <ezimanyi@ulb.ac.be>", "msg_from_op": true, "msg_subject": "Re: Error when defining a set returning function" }, { "msg_contents": "Esteban Zimanyi <ezimanyi@ulb.ac.be> writes:\n> When debugging the function with gdb, I noticed that the rsinfo variable of\n> the PostgreSQL function ExecMakeFunctionResultSet is modified in the\n> macro SRF_RETURN_NEXT causing the problem. 
Any idea how to solve this?\n\nWell, what SRF_RETURN_NEXT thinks it's doing is\n\n\t\trsi->isDone = ExprMultipleResult; \\\n\nwhich surely shouldn't change the returnMode field. At this point\nI'm guessing that you are compiling the PG headers with some compiler\npragma that changes the struct packing rules. Don't do that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 16 Apr 2021 13:04:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error when defining a set returning function" }, { "msg_contents": "Many thanks Tom for your help !\n\nI removed the flag -fshort-enums and everything works fine !\n\nOn Fri, Apr 16, 2021 at 7:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Esteban Zimanyi <ezimanyi@ulb.ac.be> writes:\n> > When debugging the function with gdb, I noticed that the rsinfo variable\n> of\n> > the PostgreSQL function ExecMakeFunctionResultSet is modified in the\n> > macro SRF_RETURN_NEXT causing the problem. Any idea how to solve this?\n>\n> Well, what SRF_RETURN_NEXT thinks it's doing is\n>\n> rsi->isDone = ExprMultipleResult; \\\n>\n> which surely shouldn't change the returnMode field. At this point\n> I'm guessing that you are compiling the PG headers with some compiler\n> pragma that changes the struct packing rules. Don't do that.\n>\n> regards, tom lane\n>\n", "msg_date": "Fri, 16 Apr 2021 21:32:22 +0200", "msg_from": "Esteban Zimanyi <ezimanyi@ulb.ac.be>", "msg_from_op": true, "msg_subject": "Re: Error when defining a set returning function" }, { "msg_contents": "\nOn 4/16/21 3:32 PM, Esteban Zimanyi wrote:\n> Many thanks Tom for your help ! \n>\n> I removed the flag -fshort-enums and everything works fine !\n>\n>\n\nIf you build with pgxs it should supply the appropriate compiler flags.\nAlternatively, get the right settings from pg_config. In general rolling\nyour own is a bad idea.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 16 Apr 2021 16:46:54 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Error when defining a set returning function" }, { "msg_contents": "> If you build with pgxs it should supply the appropriate compiler flags.\n> Alternatively, get the right settings from pg_config. In general rolling\n> your own is a bad idea.\n>\n\nI didn't know about pgxs. Many thanks Andrew for pointing this out.", "msg_date": "Sat, 17 Apr 2021 13:12:24 +0200", "msg_from": "Esteban Zimanyi <ezimanyi@ulb.ac.be>", "msg_from_op": true, "msg_subject": "Re: Error when defining a set returning function" } ]
[ { "msg_contents": "commit 87259588d0ab0b8e742e30596afa7ae25caadb18\nAuthor: Alvaro Herrera <alvherre@alvh.no-ip.org>\nDate: Thu Apr 25 10:20:23 2019 -0400\n\n Fix tablespace inheritance for partitioned rels\n\nThis doc change doesn't make sense to me:\n\n+++ b/doc/src/sgml/config.sgml\n@@ -7356,7 +7356,8 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;\n <para>\n This variable specifies the default tablespace in which to create\n objects (tables and indexes) when a <command>CREATE</command> command does\n- not explicitly specify a tablespace.\n+ not explicitly specify a tablespace. It also determines the tablespace\n+ that a partitioned relation will direct future partitions to.\n </para>\n\ndefault_tablespace is a global GUC, so if a partitioned relation \"directs\"\npartitions anywhere, it's not to the fallback value of the GUC, but to its\nreltablespace, as this patch wrote in doc/src/sgml/ref/create_table.sgml:\n\n+ the tablespace specified overrides <literal>default_tablespace</literal>\n+ as the default tablespace to use for any newly created partitions when no\n+ other tablespace is explicitly specified.\n\nMaybe I'm misreading config.sgml somehow ?\nI thought it would be more like this (but actually I think <default_tablespace>\nshouldn't say anything at all):\n\n+ ... 
It also determines the tablespace where new partitions are created,\n+ if the parent, partitioned relation doesn't have a tablespace set.\n\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 16 Apr 2021 09:31:35 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "default_tablespace doc and partitioned rels" }, { "msg_contents": "On 2021-Apr-16, Justin Pryzby wrote:\n\n> +++ b/doc/src/sgml/config.sgml\n> @@ -7356,7 +7356,8 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;\n> <para>\n> This variable specifies the default tablespace in which to create\n> objects (tables and indexes) when a <command>CREATE</command> command does\n> - not explicitly specify a tablespace.\n> + not explicitly specify a tablespace. It also determines the tablespace\n> + that a partitioned relation will direct future partitions to.\n> </para>\n> \n> default_tablespace is a global GUC, so if a partitioned relation \"directs\"\n> partitions anywhere, it's not to the fallback value of the GUC, but to its\n> reltablespace, as this patch wrote in doc/src/sgml/ref/create_table.sgml:\n\nYes, but also the partitioned table's reltablespace is going to be set\nto default_tablespace, if no tablespace is explicitly specified in the\npartitioned table creation.\n\nA partitioned table is not created anywhere itself; the only thing it\ncan do, is direct where are future partitions created. I don't think\nit's 100% obvious that default_tablespace will become the partitioned\ntable's tablespace, which is why I added that phrase. I understand that\nthe language might be unclear, but I don't think either of your\nsuggestions make this any clearer. Removing it just hides the behavior,\nand this one:\n\n> + ... 
It also determines the tablespace where new partitions are created,\n+ if the parent, partitioned relation doesn't have a tablespace set.\n\njust documents that default_tablespace will be in effect at partition\nCREATE time, but it fails to remind the user that the partitioned table\nwill acquire default_tablespace as its own tablespace.\n\nMaybe we can reword it in some other way. "If this parameter is set\nwhen a partitioned table is created, it will become the default\ntablespace for future partitions too, even if default_tablespace itself\nis reset later" ...??\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n"En las profundidades de nuestro inconsciente hay una obsesiva necesidad\nde un universo lógico y coherente. Pero el universo real se halla siempre\nun paso más allá de la lógica" (Irulan)\n\n\n", "msg_date": "Fri, 16 Apr 2021 16:19:18 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: default_tablespace doc and partitioned rels" }, { "msg_contents": "On Fri, Apr 16, 2021 at 04:19:18PM -0400, Alvaro Herrera wrote:\n> Maybe we can reword it in some other way. 
\"If this parameter is set\n> when a partitioned table is created, it will become the default\n> tablespace for future partitions too, even if default_tablespace itself\n> is reset later\" ...??\n\n+1\n\nI'd say:\n\nIf this parameter is set when a partitioned table is created, the partitioned\ntable's tablespace will be set to the given tablespace, and which will be the\ndefault tablespace for partitions create in the future, even if\ndefault_tablespace itself has since been changed.\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 16 Apr 2021 16:02:01 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: default_tablespace doc and partitioned rels" }, { "msg_contents": "On 2021-Apr-16, Justin Pryzby wrote:\n\n> If this parameter is set when a partitioned table is created, the partitioned\n> table's tablespace will be set to the given tablespace, and which will be the\n> default tablespace for partitions create in the future, even if\n> default_tablespace itself has since been changed.\n\nPushed with very similar wording:\n\n+ <para>\n+ If this parameter is set to a value other than the empty string\n+ when a partitioned table is created, the partitioned table's\n+ tablespace will be set to that value, which will be used as\n+ the default tablespace for partitions created in the future,\n+ even if <varname>default_tablespace</varname> has changed since then.\n+ </para>\n\nI made it a separate paragraph at the end, because I noticed that I had\nadded the note in an inappropriate place in the earlier commit; the\nsecond paragraph in particular is more general than this one. 
Also\nlooking at that one I realized that we need to talk about the value\nbeing \"not the empty string\".\n\nI hope it's clear enough now, but if you or anybody have further\nsuggestion on improving this, I'm listening.\n\nThanks\n\n-- \nÁlvaro Herrera Valdivia, Chile\n"If you have nothing to say, maybe you need just the right tool to help you\nnot say it." (New York Times, about Microsoft PowerPoint)\n\n\n", "msg_date": "Thu, 29 Apr 2021 11:39:11 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: default_tablespace doc and partitioned rels" } ]
[ { "msg_contents": "Hello,\n\nthis is a feature request for a rather simple functionality.\n\nI propose to implement a builtin and efficient bidirectional cast between\nctid and bigint types.\n\nAnother nice feature would be a function that can be called from a sql\nstatement and would throw an exception when executed.\n\nI know these functions can be implemented using UDF, but the performance\nand need to deploy it to every database is very inconvenient.\n\nThank you", "msg_date": "Fri, 16 Apr 2021 18:54:31 +0200", "msg_from": "=?UTF-8?Q?Vladim=C3=ADr_Houba_ml=2E?= <v.houba@gmail.com>", "msg_from_op": true, "msg_subject": "feature request ctid cast / sql exception" }, { "msg_contents": "On Sat, Apr 17, 2021 at 10:58 AM Vladimír Houba ml. <v.houba@gmail.com>\nwrote:\n\n> I propose to implement a builtin and efficient bidirectional cast between\n> ctid and bigint types.\n>\n>\nWhy?\n\n\n\n> Another nice feature would be a function that can be called from a sql\n> statement and would throw an exception when executed.\n>\n>\nAn assertion-related extension in core would be welcomed.\n\nDavid J.", "msg_date": "Sat, 17 Apr 2021 12:24:57 -0700", "msg_from": "\"David G. 
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: feature request ctid cast / sql exception" }, { "msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> On Sat, Apr 17, 2021 at 10:58 AM Vladimír Houba ml. <v.houba@gmail.com>\n> wrote:\n>> Another nice feature would be a function that can be called from a sql\n>> statement and would throw an exception when executed.\n\n> An assertion-related extension in core would be welcomed.\n\nThis has been suggested before, but as soon as you start looking\nat the details you find that it's really hard to get a one-size-fits-all\ndefinition that's any simpler than the existing plpgsql RAISE\nfunctionality. Different people have different ideas about how\nmuch decoration they want around the message. So, if 10% of the\nworld agrees with your choices and the other 90% keeps on using\na custom plpgsql function to do it their way, you haven't really\nimproved matters much. OTOH a 90% solution might be interesting to\nincorporate in core, but nobody's demonstrated that one exists.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 17 Apr 2021 15:46:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: feature request ctid cast / sql exception" }, { "msg_contents": "I use ctid as a row identifier within a transaction in a Java application.\nTo obtain the row ctid I either have to\n\n - cast it to text and store it as String\n - cast it to string, then convert it to a bigint using UDF which is\n inefficient\n\nI wish I could just cast ctid to bigint and store it as a primitive long\ntype.\n\nRegarding the exception throwing function it makes good sense for example\nin case blocks when you encouter unexpected value.\nIMHO \"one fits all\" solution may be making a raise function with the same\nsyntax as raise statement in plpgsql.\n\nRAISE([ level ] 'format' [, expression [, ... ]] [ USING option =\nexpression [, ... 
] ])\nRAISE([ level ] condition_name [ USING option = expression [, ... ] ])\nRAISE([ level ] SQLSTATE 'sqlstate' [ USING option = expression [, ... ] ])\nRAISE([ level ] USING option = expression [, ... ])\nRAISE()\n\n\nOn Sat, Apr 17, 2021 at 9:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > On Sat, Apr 17, 2021 at 10:58 AM Vladimír Houba ml. <v.houba@gmail.com>\n> > wrote:\n> >> Another nice feature would be a function that can be called from a sql\n> >> statement and would throw an exception when executed.\n>\n> > An assertion-related extension in core would be welcomed.\n>\n> This has been suggested before, but as soon as you start looking\n> at the details you find that it's really hard to get a one-size-fits-all\n> definition that's any simpler than the existing plpgsql RAISE\n> functionality.  Different people have different ideas about how\n> much decoration they want around the message.  So, if 10% of the\n> world agrees with your choices and the other 90% keeps on using\n> a custom plpgsql function to do it their way, you haven't really\n> improved matters much.  OTOH a 90% solution might be interesting to\n> incorporate in core, but nobody's demonstrated that one exists.\n>\n> regards, tom lane\n>\n\n\n-- \nS pozdravom\nVladimír Houba ml.", "msg_date": "Sat, 17 Apr 2021 21:58:08 +0200", "msg_from": "=?UTF-8?Q?Vladim=C3=ADr_Houba_ml=2E?= <v.houba@gmail.com>", "msg_from_op": true, "msg_subject": "Re: feature request ctid cast / sql exception" },
{ "msg_contents": "On Sat, Apr 17, 2021 at 12:58 PM Vladimír Houba ml. <v.houba@gmail.com>\nwrote:\n\n> I use ctid as a row identifier within a transaction in a Java application.\n>\n\nThis doesn't present a very compelling argument since an actual user\ndeclared primary key is what is expected to be used as a row identifier.\nAnd as those are typically bigint if you follow this norm you get exactly\nwhat you say you need.\n\nDavid J.", "msg_date": "Sat, 17 Apr 2021 14:05:30 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: feature request ctid cast / sql exception" },
{ "msg_contents": "On Saturday, April 17, 2021, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > On Sat, Apr 17, 2021 at 10:58 AM Vladimír Houba ml. <v.houba@gmail.com>\n> > wrote:\n> >> Another nice feature would be a function that can be called from a sql\n> >> statement and would throw an exception when executed.\n>\n> > An assertion-related extension in core would be welcomed.\n>\n> This has been suggested before, but as soon as you start looking\n> at the details you find that it's really hard to get a one-size-fits-all\n> definition that's any simpler than the existing plpgsql RAISE\n> functionality.\n>\n\nEven just getting raise functionality as a standard functional api would be\na win.  I don’t imagine enough users would care enough to write their own\nroutines if one already existed, even if they would argue details about how\nto create it in the first place.  For the expected use case of basically\ndeveloper-oriented error messages there is generally an acceptance of taking\nthe sufficient solution.\n\nDavid J.", "msg_date": "Sat, 17 Apr 2021 14:48:41 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: feature request ctid cast / sql exception" },
{ "msg_contents": "This is a specific use case, I have a big table without a pk. Updates with\nctid are blazing fast even without an index. I don't need it.\n\nThe argument behind this is that users expect this functionality, it's not\njust me. Search stackoverflow. They end up using various suboptimal\nsolutions as I described earlier. This is a very very simple functionality\nso please consider it. I'm also writing an opensource lib that would make\nuse of this. My users will be thankful to you.\n\nOn Sat, Apr 17, 2021, 23:05 David G. Johnston <david.g.johnston@gmail.com>\nwrote:\n\n> On Sat, Apr 17, 2021 at 12:58 PM Vladimír Houba ml. <v.houba@gmail.com>\n> wrote:\n>\n>> I use ctid as a row identifier within a transaction in a Java application.\n>>\n>\n> This doesn't present a very compelling argument since an actual user\n> declared primary key is what is expected to be used as a row identifier.\n> And as those are typically bigint if you follow this norm you get exactly\n> what you say you need.\n>\n> David J.\n>\n>\n", "msg_date": "Sun, 18 Apr 2021 08:50:37 +0200", "msg_from": "=?UTF-8?Q?Vladim=C3=ADr_Houba_ml=2E?= <v.houba@gmail.com>", "msg_from_op": true, "msg_subject": "Re: feature request ctid cast / sql exception" } ]
[ { "msg_contents": "\nHi,\n\nPeter Geoghegan suggested that I have the cross version upgrade checker\nrun pg_amcheck on the upgraded module. This seemed to me like a good\nidea, so I tried it, only to find that it refuses to run unless the\namcheck extension is installed. That's fair enough, but it also seems to\nme like we should have an option to have pg_amcheck install the\nextension if it's not present, by running something like 'create\nextension if not exists amcheck'. Maybe in such a case there could also\nbe an option to drop the extension when pg_amcheck's work is done - I\nhaven't thought through all the implications.\n\nGiven pg_amcheck is a new piece of work I'm not sure if we can sneak\nthis in under the wire for release 14. I will certainly undertake to\nreview anything expeditiously. I can work around this issue in the\nbuildfarm, but it seems like something other users are likely to want.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 16 Apr 2021 14:06:08 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "pg_amcheck option to install extension" }, { "msg_contents": "> On Apr 16, 2021, at 11:06 AM, Andrew Dunstan <andrew@dunslane.net> wrote:\n> \n> \n> Hi,\n> \n> Peter Geoghegan suggested that I have the cross version upgrade checker\n> run pg_amcheck on the upgraded module. This seemed to me like a good\n> idea, so I tried it, only to find that it refuses to run unless the\n> amcheck extension is installed. That's fair enough, but it also seems to\n> me like we should have an option to have pg_amcheck install the\n> extension if it's not present, by running something like 'create\n> extension if not exists amcheck'.
Maybe in such a case there could also\n> be an option to drop the extension when pg_amcheck's work is done - I\n> haven't thought through all the implications.\n> \n> Given pg_amcheck is a new piece of work I'm not sure if we can sneak\n> this in under the wire for release 14. I will certainly undertake to\n> review anything expeditiously. I can work around this issue in the\n> buildfarm, but it seems like something other users are likely to want.\n\nWe cannot quite use a \"create extension if not exists amcheck\" command, as we clear the search path and so must specify the schema to use.  Should we instead avoid clearing the search path for this?  What are the security implications of using the first schema of the search path?\n\nWhen called as `pg_amcheck --install-missing`, the implementation in the attached patch runs per database being checked a \"create extension if not exists amcheck with schema public\".  If called as `pg_amcheck --install-missing=foo`, it instead runs \"create extension if not exists amcheck with schema foo\" having escaped \"foo\" appropriately for the given database.  There is no option to use different schemas for different databases.  Nor is there any option to use the search path.  If somebody needs that, I think they can manage installing amcheck themselves.\n\nDoes this meet your needs for v14?  I'd like to get this nailed down quickly, as it is unclear to me that we should even be doing this so late in the development cycle.\n\nI'd also like your impressions on whether we're likely to move contrib/amcheck into core anytime soon.
If so, is it worth adding an option that we'll soon need to deprecate?\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sat, 17 Apr 2021 12:43:21 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg_amcheck option to install extension" }, { "msg_contents": "\nOn 4/17/21 3:43 PM, Mark Dilger wrote:\n>\n>> On Apr 16, 2021, at 11:06 AM, Andrew Dunstan <andrew@dunslane.net> wrote:\n>>\n>>\n>> Hi,\n>>\n>> Peter Geoghegan suggested that I have the cross version upgrade checker\n>> run pg_amcheck on the upgraded module. This seemed to me like a good\n>> idea, so I tried it, only to find that it refuses to run unless the\n>> amcheck extension is installed. That's fair enough, but it also seems to\n>> me like we should have an option to have pg_amcheck install the\n>> extension if it's not present, by running something like 'create\n>> extension if not exists amcheck'. Maybe in such a case there could also\n>> be an option to drop the extension when pg_amcheck's work is done - I\n>> haven't thought through all the implications.\n>>\n>> Given pg_amcheck is a new piece of work I'm not sure if we can sneak\n>> this in under the wire for release 14. I will certainly undertake to\n>> review anything expeditiously. I can work around this issue in the\n>> buildfarm, but it seems like something other users are likely to want.\n> We cannot quite use a \"create extension if not exists amcheck\" command, as we clear the search path and so must specify the schema to use.  Should we instead avoid clearing the search path for this?  What are the security implications of using the first schema of the search path?\n>\n> When called as `pg_amcheck --install-missing`, the implementation in the attached patch runs per database being checked a \"create extension if not exists amcheck with schema public\".
If called as `pg_amcheck --install-missing=foo`, it instead runs \"create extension if not exists amcheck with schema foo` having escaped \"foo\" appropriately for the given database. There is no option to use different schemas for different databases. Nor is there any option to use the search path. If somebody needs that, I think they can manage installing amcheck themselves.\n\n\n\nhow about specifying pg_catalog as the schema instead of public?\n\n\n>\n> Does this meet your needs for v14? I'd like to get this nailed down quickly, as it is unclear to me that we should even be doing this so late in the development cycle.\n\n\nI'm ok with or without - I'll just have the buildfarm client pull a list\nof databases and install the extension in all of them.\n\n\n>\n> I'd also like your impressions on whether we're likely to move contrib/amcheck into core anytime soon. If so, is it worth adding an option that we'll soon need to deprecate?\n\n\nI think if it stays as an extension it will stay in contrib. But it sure\nfeels very odd to have a core bin program that relies on a contrib\nextension. 
It seems one or the other is misplaced.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 18 Apr 2021 09:19:04 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: pg_amcheck option to install extension" }, { "msg_contents": "> On Apr 18, 2021, at 6:19 AM, Andrew Dunstan <andrew@dunslane.net> wrote:\n> \n> how about specifying pg_catalog as the schema instead of public?\n\nDone.\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sun, 18 Apr 2021 14:58:37 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg_amcheck option to install extension" }, { "msg_contents": "On 2021-Apr-18, Andrew Dunstan wrote:\n\n> On 4/17/21 3:43 PM, Mark Dilger wrote:\n\n> > I'd also like your impressions on whether we're likely to move\n> > contrib/amcheck into core anytime soon. If so, is it worth adding\n> > an option that we'll soon need to deprecate?\n> \n> I think if it stays as an extension it will stay in contrib. But it sure\n> feels very odd to have a core bin program that relies on a contrib\n> extension. It seems one or the other is misplaced.\n\nI've proposed in the past that we should have a way to provide\nextensions other than contrib -- specifically src/extensions/ -- and\nthen have those extensions installed together with the rest of core.\nThen it would be perfectly legitimate to have src/bin/pg_amcheck\ndepending on that extension.
I agree that the current situation is not\ngreat.\n\n-- \nÁlvaro Herrera       39°49'30\"S 73°17'W\n\"Thou shalt not follow the NULL pointer, for chaos and madness await\nthee at its end.\" (2nd Commandment for C programmers)\n\n\n", "msg_date": "Sun, 18 Apr 2021 19:32:40 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: pg_amcheck option to install extension" }, { "msg_contents": "\nOn 4/18/21 7:32 PM, Alvaro Herrera wrote:\n> On 2021-Apr-18, Andrew Dunstan wrote:\n>\n>> On 4/17/21 3:43 PM, Mark Dilger wrote:\n>>> I'd also like your impressions on whether we're likely to move\n>>> contrib/amcheck into core anytime soon. If so, is it worth adding\n>>> an option that we'll soon need to deprecate?\n>> I think if it stays as an extension it will stay in contrib. But it sure\n>> feels very odd to have a core bin program that relies on a contrib\n>> extension. It seems one or the other is misplaced.\n> I've proposed in the past that we should have a way to provide\n> extensions other than contrib -- specifically src/extensions/ -- and\n> then have those extensions installed together with the rest of core.\n> Then it would be perfectly legitimate to have src/bin/pg_amcheck\n> depending on that extension. I agree that the current situation is not\n> great.\n>\n\n\nOK, so let's fix it. If amcheck is going to stay in contrib then ISTM\npg_amcheck should move there.
I can organize that if there's agreement.\nOr else let's move amcheck as Alvaro suggests.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 19 Apr 2021 12:32:41 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: pg_amcheck option to install extension" }, { "msg_contents": "\n\n> On Apr 19, 2021, at 9:32 AM, Andrew Dunstan <andrew@dunslane.net> wrote:\n> \n> \n> On 4/18/21 7:32 PM, Alvaro Herrera wrote:\n>> On 2021-Apr-18, Andrew Dunstan wrote:\n>> \n>>> On 4/17/21 3:43 PM, Mark Dilger wrote:\n>>>> I'd also like your impressions on whether we're likely to move\n>>>> contrib/amcheck into core anytime soon. If so, is it worth adding\n>>>> an option that we'll soon need to deprecate?\n>>> I think if it stays as an extension it will stay in contrib. But it sure\n>>> feels very odd to have a core bin program that relies on a contrib\n>>> extension. It seems one or the other is misplaced.\n>> I've proposed in the past that we should have a way to provide\n>> extensions other than contrib -- specifically src/extensions/ -- and\n>> then have those extensions installed together with the rest of core.\n>> Then it would be perfectly legitimate to have src/bin/pg_amcheck\n>> depending on that extension. I agree that the current situation is not\n>> great.\n>> \n> \n> \n> OK, so let's fix it. If amcheck is going to stay in contrib then ISTM\n> pg_amcheck should move there. I can organize that if there's agreement.\n> Or else let's move amcheck as Alvaro suggests.\n\nAh, no.
I wrote pg_amcheck in contrib originally, and moved it to src/bin as requested during the v14 development cycle.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 19 Apr 2021 09:37:18 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg_amcheck option to install extension" }, { "msg_contents": "On Mon, Apr 19, 2021 at 12:37 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> > OK, so let's fix it. If amcheck is going to stay in contrib then ISTM\n> > pg_amcheck should move there. I can organize that if there's agreement.\n> > Or else let's move amcheck as Alvaro suggests.\n>\n> Ah, no.  I wrote pg_amcheck in contrib originally, and moved it to src/bin as requested during the v14 development cycle.\n\nYeah, I am not that excited about moving this again. I realize it was\nnever committed anywhere else, but it was moved at least once during\ndevelopment. And I don't see that moving it to contrib really fixes\nanything anyway here, except perhaps conceptually. Maybe inventing\nsrc/extensions is the right idea, but there's no real need to do that\nat this point in the release cycle, and it doesn't actually fix\nanything either. The only thing that's really needed here is to either\n(a) teach the test script to install amcheck as a separate step or (b)\nteach pg_amcheck to install amcheck in a user-specified schema. If we\ndo that, AIUI, this issue is fixed regardless of whether we move any\nsource code around, and if we don't do that, AIUI, this issue is not\nfixed no matter how much source code we move.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 19 Apr 2021 12:52:23 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_amcheck option to install extension" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> OK, so let's fix it.
If amcheck is going to stay in contrib then ISTM\n> pg_amcheck should move there. I can organize that if there's agreement.\n> Or else let's move amcheck as Alvaro suggests.\n\nFWIW, I think that putting them both in contrib makes the most\nsense from a structural standpoint.\n\nEither way, though, you'll still need the proposed option to\nlet the executable issue a CREATE EXTENSION to get the shlib\nloaded. Unless somebody is proposing that the extension be\ninstalled-by-default like plpgsql, and that I am unequivocally\nnot for.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 19 Apr 2021 12:53:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_amcheck option to install extension" }, { "msg_contents": "\n\n> On Apr 19, 2021, at 9:53 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> OK, so let's fix it. If amcheck is going to stay in contrib then ISTM\n>> pg_amcheck should move there. I can organize that if there's agreement.\n>> Or else let's move amcheck as Alvaro suggests.\n> \n> FWIW, I think that putting them both in contrib makes the most\n> sense from a structural standpoint.\n\nThat was my original thought also, largely from a package management perspective. Just as an example, postgresql-client and postgresql-contrib are separate rpms. There isn't much point to having pg_amcheck installed as part of the postgresql-client package while having amcheck in the postgresql-contrib package which might not be installed.\n\nA counter argument is that amcheck is server side, and pg_amcheck is client side. Having pg_amcheck installed on a system makes sense if you are connecting to a server on a different system.\n\n> On Mar 11, 2021, at 12:12 AM, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> I want to register, if we are going to add this, it ought to be in src/bin/. 
If we think it's a useful tool, it should be there with all the other useful tools.\n> \n> I realize there is a dependency on a module in contrib, and it's probably now not the time to re-debate reorganizing contrib. But if we ever get to that, this program should be the prime example why the current organization is problematic, and we should be prepared to make the necessary moves then.\n\nThis was the request that motivated the move to src/bin.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 19 Apr 2021 10:25:21 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg_amcheck option to install extension" }, { "msg_contents": "\nOn 4/19/21 1:25 PM, Mark Dilger wrote:\n>\n>> On Apr 19, 2021, at 9:53 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> Andrew Dunstan <andrew@dunslane.net> writes:\n>>> OK, so let's fix it. If amcheck is going to stay in contrib then ISTM\n>>> pg_amcheck should move there. I can organize that if there's agreement.\n>>> Or else let's move amcheck as Alvaro suggests.\n>> FWIW, I think that putting them both in contrib makes the most\n>> sense from a structural standpoint.\n> That was my original thought also, largely from a package management perspective. Just as an example, postgresql-client and postgresql-contrib are separate rpms. There isn't much point to having pg_amcheck installed as part of the postgresql-client package while having amcheck in the postgresql-contrib package which might not be installed.\n>\n> A counter argument is that amcheck is server side, and pg_amcheck is client side. Having pg_amcheck installed on a system makes sense if you are connecting to a server on a different system.\n\n\nThere are at least two other client side programs in contrib. 
So this\nargument doesn't quite hold water from a consistency POV.\n\n\n\n>\n>> On Mar 11, 2021, at 12:12 AM, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n>>\n>> I want to register, if we are going to add this, it ought to be in src/bin/. If we think it's a useful tool, it should be there with all the other useful tools.\n>>\n>> I realize there is a dependency on a module in contrib, and it's probably now not the time to re-debate reorganizing contrib. But if we ever get to that, this program should be the prime example why the current organization is problematic, and we should be prepared to make the necessary moves then.\n> This was the request that motivated the move to src/bin.\n>\n\n\nI missed that, so I guess maybe I can't complain too loudly. But if I'd\nseen it I would have disagreed. :-)\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 19 Apr 2021 14:54:58 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: pg_amcheck option to install extension" }, { "msg_contents": "On Mon, Apr 19, 2021 at 12:53:29PM -0400, Tom Lane wrote:\n> FWIW, I think that putting them both in contrib makes the most\n> sense from a structural standpoint.\n> \n> Either way, though, you'll still need the proposed option to\n> let the executable issue a CREATE EXTENSION to get the shlib\n> loaded. Unless somebody is proposing that the extension be\n> installed-by-default like plpgsql, and that I am unequivocally\n> not for.\n\nAgreed. 
Something like src/extensions/ would be a tempting option,\nbut I don't think that it is a good idea to introduce a new piece of\ninfrastructure at this stage, so moving both to contrib/ would be the\nbest balance with the current pieces at hand.\n--\nMichael", "msg_date": "Tue, 20 Apr 2021 10:41:50 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_amcheck option to install extension" }, { "msg_contents": "\n\n> On Apr 19, 2021, at 6:41 PM, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Mon, Apr 19, 2021 at 12:53:29PM -0400, Tom Lane wrote:\n>> FWIW, I think that putting them both in contrib makes the most\n>> sense from a structural standpoint.\n>> \n>> Either way, though, you'll still need the proposed option to\n>> let the executable issue a CREATE EXTENSION to get the shlib\n>> loaded. Unless somebody is proposing that the extension be\n>> installed-by-default like plpgsql, and that I am unequivocally\n>> not for.\n> \n> Agreed. Something like src/extensions/ would be a tempting option,\n> but I don't think that it is a good idea to introduce a new piece of\n> infrastructure at this stage, so moving both to contrib/ would be the\n> best balance with the current pieces at hand.\n\nThere is another issue to consider. Installing pg_amcheck in no way opens up an avenue of attack that I can see. It is just a client application with no special privileges. But installing amcheck arguably opens a line of attack; not one as significant as installing pageinspect, but of the same sort. Amcheck allows privileged database users to potentially get information from the tables that would otherwise be invisible even to them according to mvcc rules. (Is this already the case via some other functionality? Maybe this security problem already exists?) 
If the privileged database user has file system access, then this is not at all concerning, since they can already just open the files in a tool of their choice, but I don't see any reason why installations should require that privileged database users also be privileged to access the file system.\n\nIf you are not buying my argument here, perhaps a reference to how encryption functions are evaluated might help you see my point of view. You don't ask, \"can the attacker recover the plain text from the encrypted text\", but rather, \"can the attacker tell the difference between encrypted plain text and encrypted random noise.\" That's because it is incredibly hard to reason about what an attacker might be able to learn. Even though learning about old data using amcheck would be hard, you can't say that an attacker would never be able to recover information about deleted rows. As such, security conscious installations are within reason to refuse to install it.\n\nSince amcheck (and to a much larger extent, pageinspect) open potential data leakage issues, it makes sense for some security conscious sites to refuse to install it. pg_amcheck on the other hand could be installed everywhere. I understand why it might *feel* like pg_amcheck and amcheck have to both be installed to work, but I don't think that point of view makes much sense in reality. The computer running the client and the computer running the server are frequently not the same computer.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 19 Apr 2021 19:15:23 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg_amcheck option to install extension" }, { "msg_contents": "On Mon, Apr 19, 2021 at 07:15:23PM -0700, Mark Dilger wrote:\n> There is another issue to consider. Installing pg_amcheck in no way\n> opens up an avenue of attack that I can see. 
It is just a client\n> application with no special privileges. But installing amcheck\n> arguably opens a line of attack; not one as significant as\n> installing pageinspect, but of the same sort. Amcheck allows\n> privileged database users to potentially get information from the\n> tables that would otherwise be invisible even to them according to\n> mvcc rules. (Is this already the case via some other functionality?\n> Maybe this security problem already exists?) If the privileged\n> database user has file system access, then this is not at all\n> concerning, since they can already just open the files in a tool of\n> their choice, but I don't see any reason why installations should\n> require that privileged database users also be privileged to access\n> the file system.\n\nBy default, any functions deployed with amcheck have their execution\nrights revoked from public, meaning that only a superuser can run them\nwith a default installation. A non-superuser could execute them only\nonce GRANT'd access to them.\n--\nMichael", "msg_date": "Tue, 20 Apr 2021 12:06:55 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_amcheck option to install extension" }, { "msg_contents": "\n\n> On Apr 19, 2021, at 8:06 PM, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Mon, Apr 19, 2021 at 07:15:23PM -0700, Mark Dilger wrote:\n>> There is another issue to consider. Installing pg_amcheck in no way\n>> opens up an avenue of attack that I can see. It is just a client\n>> application with no special privileges. But installing amcheck\n>> arguably opens a line of attack; not one as significant as\n>> installing pageinspect, but of the same sort. Amcheck allows\n>> privileged database users to potentially get information from the\n>> tables that would otherwise be invisible even to them according to\n>> mvcc rules. (Is this already the case via some other functionality?\n>> Maybe this security problem already exists?) 
If the privileged\n>> database user has file system access, then this is not at all\n>> concerning, since they can already just open the files in a tool of\n>> their choice, but I don't see any reason why installations should\n>> require that privileged database users also be privileged to access\n>> the file system.\n> \n> By default, any functions deployed with amcheck have their execution\n> rights revoked from public, meaning that only a superuser can run them\n> with a default installation. A non-superuser could execute them only\n> once GRANT'd access to them.\n\nCorrect. So having amcheck installed on the system provides the database superuser with a privilege escalation attack vector. I am assuming here the database superuser is not a privileged system user, and can only log in remotely, has no direct access to the file system, etc.\n\nAlice has a database with sensitive data. She hires Bob to be her new database admin, with superuser privilege, but doesn't want Bob to see the sensitive data, so she deletes it first. The data is dead but still on disk.\n\nBob discovers a bug in postgres that will corrupt dead rows that some index is still pointing at. This attack requires sufficient privilege to trigger the bug, but presumably he has that much privilege, because he is a database superuser. Let's call this attack C(x) where \"C\" means the corruption inducing function, and \"x\" is the indexed key for which dead rows will be corrupted.\n\nBob runs \"CREATE EXTENSION amcheck\", and then successively runs C(x) followed by amcheck for each interesting value of x. Bob learns which of these values were in the system before Alice deleted them.\n\nThis is a classic privilege escalation attack. Bob has one privilege, and uses it to get another.\n\nAlice might foresee this behavior from Bob and choose not to install contrib/amcheck. 
This is paranoid on her part, but does not cross the line into insanity.\n\nThe postgres community has every reason to keep amcheck in contrib so that users such as Alice can make this decision.\n\nNo similar argument has been put forward for why pg_amcheck should be kept in contrib.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 19 Apr 2021 20:39:06 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg_amcheck option to install extension" }, { "msg_contents": "On Mon, Apr 19, 2021 at 08:39:06PM -0700, Mark Dilger wrote:\n> This is a classic privilege escalation attack. Bob has one\n> privilege, and uses it to get another.\n\nBob is a superuser, so it has all the privileges of the world for this\ninstance. In what is that different from BASE_BACKUP or just COPY\nFROM PROGRAM?\n\nI am not following your argument here.\n--\nMichael", "msg_date": "Tue, 20 Apr 2021 13:22:20 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_amcheck option to install extension" }, { "msg_contents": "\n\n> On Apr 19, 2021, at 9:22 PM, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Mon, Apr 19, 2021 at 08:39:06PM -0700, Mark Dilger wrote:\n>> This is a classic privilege escalation attack. Bob has one\n>> privilege, and uses it to get another.\n> \n> Bob is a superuser, so it has all the privileges of the world for this\n> instance. In what is that different from BASE_BACKUP or just COPY\n> FROM PROGRAM?\n\nI think you are conflating the concept of an operating system administrator with the concept of the database superuser/owner. If the operating system user that postgres is running as cannot execute any binaries, then \"copy from program\" is not a way for a database administrator to escape the jail. If Bob does not have ssh access to the system, he cannot run pg_basebackup.
\n\n> I am not following your argument here.\n\nThe argument is that the operating system user that postgres is running as, perhaps user \"postgres\", can read the files in the $PGDATA directory, but Bob can only see the MVCC view of the data, not the raw data. Installing contrib/amcheck allows Bob to get a peek behind the curtain.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 19 Apr 2021 22:31:18 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg_amcheck option to install extension" }, { "msg_contents": "On Mon, Apr 19, 2021 at 10:31:18PM -0700, Mark Dilger wrote:\n> I think you are conflating the concept of an operating system\n> administrator with the concept of the database superuser/owner. If\n> the operating system user that postgres is running as cannot execute\n> any binaries, then \"copy from program\" is not a way for a database\n> administrator to escape the jail. If Bob does not have ssh access to\n> the system, he cannot run pg_basebackup. \n\nYou don't need much to be able to take a base backup once you have a\nconnection to the backend as long as you have the permissions to do\nso. In this case that would be just the replication permissions.\n\n> The argument is that the operating system user that postgres is\n> running as, perhaps user \"postgres\", can read the files in the\n> $PGDATA directory, but Bob can only see the MVCC view of the data,\n> not the raw data. Installing contrib/amcheck allows Bob to get a\n> peek behind the curtain.\n\nIn my world, a superuser is considered as an entity holding the same\nrights as the OS user running the PostgreSQL instance, so that's wider\nthan the definition you are thinking of here.
There are many fancy\nthings one can do in this case, just to name a few that give access to\nany files of the data directory or even other paths:\n- pg_read_file(), and take the equivalent of a base backup with a\nRECURSIVE CTE.\n- BASE_BACKUP, doable from a simple psql session with a replication\nconnection.\n- Untrusted languages.\n\nSo I don't understand your argument with amcheck here because any of\nits features *requires* superuser rights unless granted. Coming back\nto your example, Alice actually gave up the control of her database to\nBob the moment she gave him superuser rights. If she really wanted to\nprotect her privacy, she'd better think about a more restricted set of\nACLs for Bob before letting him manage her data, even if she considers\nherself \"safe\" after she deleted it, but she's really not safe by any\nmeans. I still stand with the point of upthread to put all that in\ncontrib/ for now, without discarding that this could be moved\nsomewhere else in the future.\n--\nMichael", "msg_date": "Tue, 20 Apr 2021 16:37:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_amcheck option to install extension" }, { "msg_contents": "On Mon, Apr 19, 2021 at 2:55 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> There are at least two other client side programs in contrib. So this\n> argument doesn't quite hold water from a consistency POV.\n\nI thought that at first, too. But then I realized that those programs\nare oid2name and vacuumlo. And oid2name, at least, seems like\nsomething we ought to just consider removing. It's unclear why this is\nsomething that really deserves a command-line utility rather than just\nsome additional psql options or something. 
Does anyone really use it?\n\nvacuumlo isn't that impressive either, since it makes the very tenuous\nassumption that an oid column is intended to reference a large object,\nand the documentation doesn't even acknowledge what a shaky idea that\nactually is. But I suspect it has much better chances of being useful\nin practice than oid2name. In fact, I've heard of people using it and,\nI think, finding it useful, so we probably don't want to just nuke it.\n\nBut the point is, as things stand today, almost everything in contrib\nis an extension, not a binary. And we might want to view the\nexceptions as loose ends to be cleaned up, rather than a pattern to\nemulate.\n\nIt's a judgement call, though.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 20 Apr 2021 08:47:33 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_amcheck option to install extension" }, { "msg_contents": "On Tue, Apr 20, 2021 at 2:47 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Apr 19, 2021 at 2:55 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> > There are at least two other client side programs in contrib. So this\n> > argument doesn't quite hold water from a consistency POV.\n>\n> I thought that at first, too. But then I realized that those programs\n> are oid2name and vacuumlo. And oid2name, at least, seems like\n> something we ought to just consider removing. It's unclear why this is\n> something that really deserves a command-line utility rather than just\n> some additional psql options or something. Does anyone really use it?\n\nYeah, this seems like it could relatively simply just be a SQL query in psql.\n\n> vacuumlo isn't that impressive either, since it makes the very tenuous\n> assumption that an oid column is intended to reference a large object,\n> and the documentation doesn't even acknowledge what a shaky idea that\n> actually is. 
But I suspect it has much better chances of being useful\n> in practice than oid2name. In fact, I've heard of people using it and,\n> I think, finding it useful, so we probably don't want to just nuke it.\n\nYes, I've definitely run into using vacuumlo many times.\n\n\n> But the point is, as things stand today, almost everything in contrib\n> is an extension, not a binary. And we might want to view the\n> exceptions as loose ends to be cleaned up, rather than a pattern to\n> emulate.\n\nI could certainly sign up for moving vacuumlo to bin/ and replacing\noid2name with something in psql for example.\n\n(But yes, I realize this rapidly turns into another instance of the\nbikeshedding about the future of contrib..)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Tue, 20 Apr 2021 14:54:07 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: pg_amcheck option to install extension" }, { "msg_contents": "On Tue, Apr 20, 2021 at 1:31 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> I think you are conflating the concept of an operating system adminstrator with the concept of the database superuser/owner.\n\nYou should conflate those things, because there's no meaningful\nprivilege boundary between them:\n\nhttp://rhaas.blogspot.com/2020/12/cve-2019-9193.html\n\nIf reading the whole thing is too much, scroll down to the part in\nfixed-width font and behold me trivially compromising the OS account\nusing plperlu.\n\nI actually think this is a design error on our part. A lot of people,\napparently including you, feel that there should be a privilege\nboundary between the PostgreSQL superuser and the OS user, or want\nsuch a boundary to exist. 
It would be quite useful if there were a\nboundary there, because it's entirely reasonable to want to have a\nuser who is allowed to do everything with the database except escape\ninto the OS account, and I can't think of any reason why we couldn't\nset things up so that this is possible. We'd have to bar some things\nthat the superuser can currently do, like directly modify system\ntables and use COPY TO/FROM PROGRAM, but there's a lot of things we\ncould allow too, like reading all the data and creating and deleting\naccounts and setting their permissions arbitrarily, except maybe for\nany special super-DUPER users who are allowed to do things that escape\nthe sandbox.\n\nNow it would take a fair amount of work to make that distinction in a\nrigorous way and figure out exactly what the design ought to be, and\nI'm not volunteering. But I bet a lot of people would like it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 20 Apr 2021 08:54:39 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_amcheck option to install extension" }, { "msg_contents": "On 2021-Apr-20, Michael Paquier wrote:\n\n> Agreed. Something like src/extensions/ would be a tempting option,\n> but I don't think that it is a good idea to introduce a new piece of\n> infrastructure at this stage, so moving both to contrib/ would be the\n> best balance with the current pieces at hand.\n\nActually I think the best balance would be to leave things where they\nare, and move amcheck to src/extensions/ once the next devel cycle\nopens. That way, we avoid the (pretty much pointless) laborious task of\nmoving pg_amcheck to contrib only to move it back on the next cycle.\n\nWhat I'm afraid of, if we move pg_amcheck to contrib, is that during the\nnext cycle people will say that they are both perfectly fine in contrib/\nand so we don't need to move anything anywhere. 
And next time someone\nwants to create a new extension that would be perfectly fine in core,\nthey will not want to have that one be the one that creates\nsrc/extensions/, because then that in itself is a contentious point that\ncan get the whole patch rejected.\n\nIn a sense, what I'm doing is supporting the idea that \"incremental\ndevelopment\" applies to procedure too.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\n\n", "msg_date": "Tue, 20 Apr 2021 10:51:40 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: pg_amcheck option to install extension" }, { "msg_contents": "\nOn 4/20/21 8:47 AM, Robert Haas wrote:\n> On Mon, Apr 19, 2021 at 2:55 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>> There are at least two other client side programs in contrib. So this\n>> argument doesn't quite hold water from a consistency POV.\n> I thought that at first, too. But then I realized that those programs\n> are oid2name and vacuumlo. And oid2name, at least, seems like\n> something we ought to just consider removing. It's unclear why this is\n> something that really deserves a command-line utility rather than just\n> some additional psql options or something. Does anyone really use it?\n>\n> vacuumlo isn't that impressive either, since it makes the very tenuous\n> assumption that an oid column is intended to reference a large object,\n> and the documentation doesn't even acknowledge what a shaky idea that\n> actually is. But I suspect it has much better chances of being useful\n> in practice than oid2name. In fact, I've heard of people using it and,\n> I think, finding it useful, so we probably don't want to just nuke it.\n>\n> But the point is, as things stand today, almost everything in contrib\n> is an extension, not a binary. And we might want to view the\n> exceptions as loose ends to be cleaned up, rather than a pattern to\n> emulate.\n>\n> It's a judgement call, though.\n>\n\n\nYeah.
I'll go along with Alvaro and say let's let sleeping dogs lie at\nthis stage of the dev process, and pick the discussion up after we branch.\n\n\nI will just note one thing: the binaries in contrib have one important\nfunction that hasn't been mentioned, namely to test using pgxs to build\nbinaries. That doesn't have to live in contrib, but we should have\nsomething that does that somewhere in the build process, so if we\nremove oid2name and vacuumlo from contrib we need to look into that.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 20 Apr 2021 11:08:48 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: pg_amcheck option to install extension" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Actually I think the best balance would be to leave things where they\n> are, and move amcheck to src/extensions/ once the next devel cycle\n> opens. That way, we avoid the (pretty much pointless) laborious task of\n> moving pg_amcheck to contrib only to move it back on the next cycle.\n\n> What I'm afraid of, if we move pg_amcheck to contrib, is that during the\n> next cycle people will say that they are both perfectly fine in contrib/\n> and so we don't need to move anything anywhere.\n\nIndeed. But I'm down on this idea of inventing src/extensions,\nbecause then there will constantly be questions about whether FOO\nbelongs in contrib/ or src/extensions/. Unless we just move\neverything there, and then the question becomes why bother.
Sure,\n\"contrib\" is kind of a legacy name, but PG is full of legacy names.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 20 Apr 2021 11:09:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_amcheck option to install extension" }, { "msg_contents": "\n\n> On Apr 20, 2021, at 5:54 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Tue, Apr 20, 2021 at 1:31 AM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> I think you are conflating the concept of an operating system adminstrator with the concept of the database superuser/owner.\n> \n> You should conflate those things, because there's no meaningful\n> privilege boundary between them:\n\nThis discussion started in response to the idea that pg_amcheck needs to be moved into contrib, presumably because that's where amcheck lives. I am arguing against the move.\n\nThe actual use case I have in mind is \"Postgres as a service\", where a company (Alice) rents space in the cloud and runs postgres databases which can be rented out to a tenant (Bob) who is the owner of his database, but not privileged on the underlying system in any way. Bob has enough privileges to run CREATE EXTENSION, but is limited to the extensions that Alice has made available. Alice evaluates packages and chooses not to install most of them, including amcheck. Or perhaps Alice chooses not to install any contrib modules. Either way, the location of amcheck in contrib is useful to Alice because it makes her choice not to install it very simple.\n\nBob, however, is connecting to databases provided by Alice, and is not trying to limit himself. He is happy to have the pg_amcheck client installed. 
If Alice's databases don't allow him to run amcheck, pg_amcheck is not useful relative to those databases, but perhaps Bob is also renting database space from Charlie and Charlie's databases have amcheck installed.\n\nNow, the question is, \"In which postgres package does Bob think pg_amcheck should reside?\" It would be strange to say that Bob needs to install the postgresql-contrib rpm in order to get the pg_amcheck client. That rpm is mostly a bunch of modules, and may even have a package dependency on postgresql-server. Bob doesn't want either of those. He just wants the clients.\n\n\n\nThe discussion about using amcheck as a privilege escalation attack was mostly to give some background for why Alice might not want to install amcheck. I think it got a bit out of hand, in no small part because I was being imprecise about Bob's exact privilege levels. I was being imprecise about that part because my argument wasn't \"here's how to leverage amcheck to exploit postgres\", but rather, \"here's what Alice might rationally be concerned about.\" To run CREATE EXTENSION does not actually require superuser privileges. It depends on the package. 
At the moment, you can't load amcheck without superuser privileges, but you can load some others, such as intarray:\n\nbob=> create extension amcheck;\n2021-04-20 07:40:46.758 PDT [80340] ERROR: permission denied to create extension \"amcheck\"\n2021-04-20 07:40:46.758 PDT [80340] HINT: Must be superuser to create this extension.\n2021-04-20 07:40:46.758 PDT [80340] STATEMENT: create extension amcheck;\nERROR: permission denied to create extension \"amcheck\"\nHINT: Must be superuser to create this extension.\nbob=> create extension intarray;\nCREATE EXTENSION\nbob=> \n\nAlice might prefer to avoid installing amcheck altogether, not wanting to have to evaluate on each upgrade whether the privileges necessary to load amcheck have changed.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n\n\n", "msg_date": "Tue, 20 Apr 2021 08:33:52 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg_amcheck option to install extension" }, { "msg_contents": "\nOn 4/20/21 11:09 AM, Tom Lane wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>> Actually I think the best balance would be to leave things where they\n>> are, and move amcheck to src/extensions/ once the next devel cycle\n>> opens. That way, we avoid the (pretty much pointless) laborious task of\n>> moving pg_amcheck to contrib only to move it back on the next cycle.\n>> What I'm afraid of, if we move pg_amcheck to contrib, is that during the\n>> next cycle people will say that they are both perfectly fine in contrib/\n>> and so we don't need to move anything anywhere.\n> Indeed. But I'm down on this idea of inventing src/extensions,\n> because then there will constantly be questions about whether FOO\n> belongs in contrib/ or src/extensions/. Unless we just move\n> everything there, and then the question becomes why bother. 
Sure,\n> \"contrib\" is kind of a legacy name, but PG is full of legacy names.\n>\n> \t\t\t\n\n\n\nI think the distinction I would draw is between things we would expect\nto be present in every Postgres installation (e.g. pg_stat_statements,\nauto_explain, postgres_fdw, hstore) and things we don't for one reason\nor another (e.g. pgcrypto, adminpack)\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 20 Apr 2021 12:00:56 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: pg_amcheck option to install extension" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 4/20/21 11:09 AM, Tom Lane wrote:\n>> Indeed. But I'm down on this idea of inventing src/extensions,\n>> because then there will constantly be questions about whether FOO\n>> belongs in contrib/ or src/extensions/.\n\n> I think the distinction I would draw is between things we would expect\n> to be present in every Postgres installation (e.g. pg_stat_statements,\n> auto_explain, postgres_fdw, hstore) and things we don't for one reason\n> or another (e.g. pgcrypto, adminpack)\n\nI dunno, that division appears quite arbitrary and endlessly\nbikesheddable. It's something I'd prefer not to spend time\narguing about, but the only way we won't have such arguments\nis if we don't make the distinction in the first place.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 20 Apr 2021 12:04:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_amcheck option to install extension" }, { "msg_contents": "On Tue, Apr 20, 2021 at 12:05 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I think the distinction I would draw is between things we would expect\n> > to be present in every Postgres installation (e.g. pg_stat_statements,\n> > auto_explain, postgres_fdw, hstore) and things we don't for one reason\n> > or another (e.g. 
pgcrypto, adminpack)\n>\n> I dunno, that division appears quite arbitrary and endlessly\n> bikesheddable.\n\n+1. I wouldn't expect those things to be present in every\ninstallation, for sure. I don't know that I've *ever* seen a customer\nuse hstore. If I have, it wasn't often. There's no way we'll ever get\nconsensus on which stuff people use, because it's different depending\non what customers you work with.\n\nThe stuff I feel bad about is stuff like 'isn' and 'earthdistance' and\n'intarray', which are basically useless toys with low code quality.\nYou'd hate for people to confuse that with stuff like 'dblink' or\n'pgcrypto' which might actually be useful. But there's a big, broad\nfuzzy area in the middle where everyone is going to have different\nopinions. And even things like 'isn' and 'earthdistance' and\n'intarray' may well have defenders, either because somebody thinks\nit's valuable as a coding example, or because somebody really did use\nit in anger and had success.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 20 Apr 2021 12:56:05 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_amcheck option to install extension" }, { "msg_contents": "\n\n> On Apr 20, 2021, at 5:54 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Tue, Apr 20, 2021 at 1:31 AM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> I think you are conflating the concept of an operating system adminstrator with the concept of the database superuser/owner.\n> \n> You should conflate those things, because there's no meaningful\n> privilege boundary between them:\n\nI understand why you say so, but I think the situation is more nuanced than that.\n\n> http://rhaas.blogspot.com/2020/12/cve-2019-9193.html\n> \n> If reading the whole thing is too much, scroll down to the part in\n> fixed-width font and behold me trivially compromising the OS account\n> using plperlu.\n\nI think the question here 
is whether PostgreSQL is inherently insecure, meaning that it cannot function unless installed in a way that would allow the database superuser Bob to compromise the OS administered by Alice.\n\nMagnus seems to object even to this formulation in his blog post, https://blog.hagander.net/when-a-vulnerability-is-not-a-vulnerability-244/, saying \"a common setup is to only allow the postgres OS user itself to act as superuser, in which case there is no escalation at all.\" He seems to view Bob taking over the OS account as nothing more than Alice taking over her own account, since nobody but Alice should ever be able to log in as Bob. At a minimum, I think that means that Alice must trust PostgreSQL to contain zero exploits. If database user Charlie can escalate his privileges to the level of Bob, then Alice has a big problem. Assuming Alice is an average prudent system administrator, she doesn't really want to trust that PostgreSQL is completely exploit free. She just wants to quarantine it enough that she can sleep at night.\n\nI think we have made the situation for Alice a bit difficult. She needs to make sure that whichever user the backend runs as does not have permission to access anything beyond the PGDATA directory and a handful of postgres binaries, otherwise Bob, and perhaps Charlie, can access them. She can do this most easily with containers, or at least it seems so to me. The only binaries that should be executable from within the container are \"postgres\", \"locale\", and whichever hardened archive command, recovery command, and restore command Alice wants to allow. The only shell that should be executable from within the container should be minimal, maybe something custom written by Alice that only works to recognize the very limited set of commands Alice wants to allow and then forks/execs those commands without allowing any further shell magic. 
\"Copy to program\" and \"copy from program\" internally call popen, which calls the shell, and if Alice's custom shell doesn't offer to pipe anything to the target program, Bob can't really do anything that way. \"locale -a\" doesn't seem particularly vulnerable to being fed garbage, and in any event, Alice's custom shell doesn't have to implement the pipe stream logic in that direction. She could make it unidirectional from `locale -a` back to postgres. The archive, recovery, and restore commands are internally invoked using system() which calls those commands indirectly using Alice's shell. Once again, she could write the shell to not pipe anything in either direction, which pretty well prevents Bob from doing anything malicious with them.\n\nReading and writing postgresql data files seems a much trickier problem. The \"copy to file\" and \"copy from file\" implementations don't go through the shell, and Alice can't deny the database reading or writing the data directory, so there doesn't seem to be any quarantine trick that will work. Bob can copy arbitrary malicious content to or from that directory. I don't see how this gets Bob any closer to compromising the OS account, though. All Bob is doing is messing up his own database. Even if corrupting these files convinces the postgres backend to attempt to write somewhere else in the system, the container should be sufficient to prevent it from actually succeeding outside its own data directory.\n\nThe issue of the pg_read_file() sql function, and similar functions, would seem to fall into the same category as \"copy to file\" and \"copy from file\". Bob can read and write his own data directory, but not anything else, assuming Alice set up the container properly.\n\n> I actually think this is a design error on our part. A lot of people,\n> apparently including you, feel that there should be a privilege\n> boundary between the PostgreSQL superuser and the OS user, or want\n> such a boundary to exist. 
\n\nI'm arguing that the boundary does currently (almost) exist, but is violated by default, easy to further violate without realizing you are doing so, inconvenient and hard to maintain in practice, requires segregating the database superuser from whichever administrator(s) execute other tools, requires being paranoid when running such tools against the database because any content found therein could have been maliciously corrupted by the database administrator in a way that you are not expecting, requires a container or chroot jail and a custom shell, and this whole mess should not be made any more difficult.\n\nWe could make this incrementally easier by finding individual problems which have solutions generally acceptable to the community and tackling them one at a time. I don't see there will be terribly many such solutions, though, if the community sees no value in putting a boundary between Bob and Alice.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 20 Apr 2021 15:00:39 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Privilege boundary between sysadmin and database superuser [Was: Re:\n pg_amcheck option to install extension]" }, { "msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>> On Apr 20, 2021, at 5:54 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n>> On Tue, Apr 20, 2021 at 1:31 AM Mark Dilger\n>> <mark.dilger@enterprisedb.com> wrote:\n>>> I think you are conflating the concept of an operating system administrator with the concept of the database superuser/owner.\n\n>> You should conflate those things, because there's no meaningful\n>> privilege boundary between them:\n\n> I understand why you say so, but I think the situation is more nuanced than that.\n\nMaybe I too am confused, but I understand \"operating system administrator\"\nto mean \"somebody who has root, or some elevated OS privilege
level, on\nthe box running Postgres\". That is 100% distinct from the operating\nsystem user that runs Postgres, which should generally be a bog-standard\nOS user. (In fact, we try to prevent you from running Postgres as root.)\n\nWhat there is not a meaningful privilege boundary between is that standard\nOS user and a within-the-database superuser. Since we allow superusers to\ntrigger file reads and writes, and indeed execute programs, from within\nthe DB, a superuser can surely reach anything the OS user can do.\n\nThe rest of your analysis seems a bit off-point to me, which is what\nmakes me think that one of us is confused. If Alice is storing her\ndata in a Postgres database, she had better trust both the Postgres\nsuperuser and the box's administrators ... otherwise, she should go\nget her own box and her own Postgres installation.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 20 Apr 2021 18:19:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Privilege boundary between sysadmin and database superuser [Was:\n Re: pg_amcheck option to install extension]" }, { "msg_contents": "\n\n> On Apr 20, 2021, at 3:19 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>>> On Apr 20, 2021, at 5:54 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n>>> On Tue, Apr 20, 2021 at 1:31 AM Mark Dilger\n>>> <mark.dilger@enterprisedb.com> wrote:\n>>>> I think you are conflating the concept of an operating system adminstrator with the concept of the database superuser/owner.\n> \n>>> You should conflate those things, because there's no meaningful\n>>> privilege boundary between them:\n> \n>> I understand why you say so, but I think the situation is more nuanced than that.\n> \n> Maybe I too am confused, but I understand \"operating system administrator\"\n> to mean \"somebody who has root, or some elevated OS privilege level, on\n> the box running Postgres\". 
That is 100% distinct from the operating\n> system user that runs Postgres, which should generally be a bog-standard\n> OS user. (In fact, we try to prevent you from running Postgres as root.)\n> \n> What there is not a meaningful privilege boundary between is that standard\n> OS user and a within-the-database superuser. Since we allow superusers to\n> trigger file reads and writes, and indeed execute programs, from within\n> the DB, a superuser can surely reach anything the OS user can do.\n\nRight. This is the part that Alice might want to restrict, and we don't have an easy way for her to do so.\n\n> The rest of your analysis seems a bit off-point to me, which is what\n> makes me think that one of us is confused. If Alice is storing her\n> data in a Postgres database, she had better trust both the Postgres\n> superuser and the box's administrators ... otherwise, she should go\n> get her own box and her own Postgres installation.\n\nIt is the other way around. Alice is the operating system administrator who doesn't trust Bob. She wants Bob to be able to do any database thing he wants within the PostgreSQL environment, but doesn't want that to leak out as an ability to run arbitrary stuff on the system, not even if it's just stuff running as bog-standard user \"postgres\". In my view, Alice can accomplish this goal using a very tightly designed container, but it is far from easy to do and to get right. 
\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 20 Apr 2021 17:30:56 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Privilege boundary between sysadmin and database superuser [Was:\n Re: pg_amcheck option to install extension]" }, { "msg_contents": "Greetings,\n\n* Mark Dilger (mark.dilger@enterprisedb.com) wrote:\n> > On Apr 20, 2021, at 3:19 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > The rest of your analysis seems a bit off-point to me, which is what\n> > makes me think that one of us is confused. If Alice is storing her\n> > data in a Postgres database, she had better trust both the Postgres\n> > superuser and the box's administrators ... otherwise, she should go\n> > get her own box and her own Postgres installation.\n> \n> It is the other way around. Alice is the operating system administrator who doesn't trust Bob. She wants Bob to be able to do any database thing he wants within the PostgreSQL environment, but doesn't want that to leak out as an ability to run arbitrary stuff on the system, not even if it's just stuff running as bog-standard user \"postgres\". In my view, Alice can accomplish this goal using a very tightly designed container, but it is far from easy to do and to get right. \n\nThen Bob doesn't get to be a superuser.\n\nThere's certainly some capabilities that aren't able to be GRANT'd out\ntoday and which are reserved for superusers, but there's been ongoing\nwork to improve on that situation (pg_read_all_data being one of the\nrecent improvements in this area, in fact...). Certainly, if others are\ninterested in that then it'd be great to have more folks working to\nimprove the situation.\n\nWe do need to make it clear when a given capability isn't intended to\nallow a user who has that capability to be able to become a superuser\nand when the capability itself means that they would be able to. 
The\npredefined role 'pg_execute_server_program' grants out the capability to\nexecute programs on the server, which both allows a user to become a\nsuperuser if they wished, and goes against your stated requirement that\nBob isn't allowed to do that, so that predefined role shouldn't be\nGRANT'd to Bob.\n\nThe question is: what do you wish Bob could do, as a non-superuser, that\nBob isn't able to do today? Assuming that there's a set of capabilities\nthere that both wouldn't allow Bob to become a superuser (which implies\nthat Bob can't do things like execute arbitrary programs or read/write\narbitrary files on the server) and which would allow Bob to perform the\nactions you'd like Bob to be able to do, it's mostly a matter of\nprogramming to make it happen...\n\nThanks,\n\nStephen", "msg_date": "Wed, 21 Apr 2021 16:24:35 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Privilege boundary between sysadmin and database superuser [Was:\n Re: pg_amcheck option to install extension]" }, { "msg_contents": "\nOn 4/18/21 5:58 PM, Mark Dilger wrote:\n>\n>> On Apr 18, 2021, at 6:19 AM, Andrew Dunstan <andrew@dunslane.net> wrote:\n>>\n>> how about specifying pg_catalog as the schema instead of public?\n> Done.\n>\n\n\nPushed with slight changes.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 24 Apr 2021 13:53:57 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: pg_amcheck option to install extension" } ]
[ { "msg_contents": "I have a publication/subscription setup. Then 10 days ago my replication just\nstopped, with no interesting message in the log on either side. A select from\npg_stat_replication was empty, and some time later, an hour or two,\npg_stat_replication came back showing a record like it should. Well, is it\nreplicating? No: not one new record on the replica, and some more time later\npg_stat_replication was empty again. The only log entries were created on the\nprimary; initially wal_sender_timeout was 15 minutes. I tried changing it to\n1 hour, in case it was a huge transaction, but no, the problem remained the same.\nWhen I set the timeout to 0 on sender and receiver, I saw that on the replica\npg_stat_activity showed DataFileRead in the Wait Event Name field and IO in the Wait\nEvent Type field, and that status remained the same for hours. The documentation\nsays DataFileRead is Waiting for a read from a relation data file, so it\ndoes not specify which file is being waited on. So, there is a problem with some\ntable on my replica, but ... which file is it? Well, it would take a lot of\ntime, but let's do a VACUUM FULL. It ran for 1 or 2 days but found two\ncorrupted tables. I couldn't select from or dump those tables, so I just\ndropped them, and then replication finally came back.\n\nThis long explanation is only to show that, at least for me, it would be\ndesirable to have additional information when Postgres is waiting for a\nfile. What if DataFileRead showed the relfilenode it is waiting for?\n\nThese logs are from the primary; on the replica I didn't find any. 
And strangely,\nthose errors did not occur at every wal_sender_timeout interval; even when it was\nconfigured to 15 minutes, they occurred only 2 or 3 times a day.\n\nlog_time;user_name;session_start_time;error_severity;message\n2021-04-06 16:21:23.385;replicate;2021-04-06 16:21:23.000;LOG;starting\nlogical decoding for slot sub_google_bkp\n2021-04-06 16:21:23.386;replicate;2021-04-06 16:21:23.000;LOG;logical\ndecoding found consistent point at 5C1/5C7A3D48\n2021-04-06 16:36:24.175;replicate;2021-04-06 16:21:23.000;LOG;terminating\nwalsender process due to replication timeout\n\n2021-04-06 17:47:15.744;replicate;2021-04-06 17:47:15.000;LOG;starting\nlogical decoding for slot sub_google_bkp\n2021-04-06 17:47:15.745;replicate;2021-04-06 17:47:15.000;LOG;logical\ndecoding found consistent point at 5C1/5CBFBF38\n2021-04-06 18:02:15.757;replicate;2021-04-06 17:47:15.000;LOG;terminating\nwalsender process due to replication timeout\n\n2021-04-07 11:59:27.831;replicate;2021-04-07 11:59:27.000;LOG;starting\nlogical decoding for slot sub_google_bkp\n2021-04-07 11:59:27.832;replicate;2021-04-07 11:59:27.000;LOG;logical\ndecoding found consistent point at 5C1/5CBFBF38\n2021-04-07 12:14:27.867;replicate;2021-04-07 11:59:27.000;LOG;terminating\nwalsender process due to replication timeout\n\n2021-04-07 21:45:22.230;replicate;2021-04-07 21:45:22.000;LOG;starting\nlogical decoding for slot sub_google_bkp\n2021-04-07 21:45:22.231;replicate;2021-04-07 21:45:22.000;LOG;logical\ndecoding found consistent point at 5C1/5CBFD438\n2021-04-07 22:45:22.586;replicate;2021-04-07 21:45:22.000;LOG;terminating\nwalsender process due to replication timeout\n\n2021-04-08 00:06:26.253;replicate;2021-04-08 00:06:25.000;LOG;starting\nlogical decoding for slot sub_google_bkp\n2021-04-08 00:06:26.255;replicate;2021-04-08 00:06:25.000;LOG;logical\ndecoding found consistent point at 5C1/5CCF7E20\n2021-04-08 02:15:10.342;replicate;2021-04-08 00:06:25.000;LOG;terminating\nwalsender process due to replication 
timeout\n\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n", "msg_date": "Fri, 16 Apr 2021 11:14:19 -0700 (MST)", "msg_from": "PegoraroF10 <marcos@f10.com.br>", "msg_from_op": true, "msg_subject": "More info on pg_stat_activity Wait Event Name when is DataFileRead" }, { "msg_contents": "On Sat, Apr 17, 2021 at 1:58 PM PegoraroF10 <marcos@f10.com.br> wrote:\n> This long explaining was only to show, at least for me, that would be\n> desirable to have an additional information when Postgres is waiting for a\n> file. What if DataFileRead showing relfilenode it´s waiting for ?\n\nI agree that this would be nice, but it's pretty much impossible to do\nit without adding quite a bit more overhead than the current system\nhas. And it already has enough overhead to make Andres at least\nslightly grumpy, though admittedly a lot of things have enough\noverhead to make Andres grumpy, because he REALLY likes it when things\ngo fast. :-)\n\nI suspect it's best to investigate problems like the one you're having\nusing a tool like strace, which can provide way more detail than we\ncould ever cram into a wait event, like the data actually read or\nwritten, timestamps for every operation, etc. But I also kind of\nwonder whether it really matters. If your system is getting stuck in a\nDataFileRead event for a long period of time, and assuming that's for\nreal and not just some kind of reporting bug, it sounds a lot like you\nhave a bad disk or a severely overloaded I/O subsystem. 
Because what\nelse would lead to the system getting stuck inside read() for a long\ntime?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 18 Apr 2021 15:18:33 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: More info on pg_stat_activity Wait Event Name when is\n DataFileRead" }, { "msg_contents": "I'm sure the problem was hardware and I hope it does not occur anymore.\nIf I have logical replication and on the replica I do a VACUUM FULL, CLUSTER,\nor any other operation that takes an EXCLUSIVE lock, replication will wait for that.\nWhat I was thinking about was a time limit on releasing that lock, or in my situation a\nhardware problem. If N seconds\nlater that file is still not released, then change DataFileRead to\nDataFileRead + relfilenode\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n", "msg_date": "Mon, 19 Apr 2021 09:01:06 -0700 (MST)", "msg_from": "PegoraroF10 <marcos@f10.com.br>", "msg_from_op": true, "msg_subject": "Re: More info on pg_stat_activity Wait Event Name when is\n DataFileRead" }, { "msg_contents": "On Mon, Apr 19, 2021 at 12:17 PM PegoraroF10 <marcos@f10.com.br> wrote:\n> I'm sure the problem was hardware and I hope it does not occur anymore.\n> If I have logical replication and on the replica I do a VACUUM FULL, CLUSTER,\n> or any other operation that takes an EXCLUSIVE lock, replication will wait for that.\n> What I was thinking about was a time limit on releasing that lock, or in my situation a\n> hardware problem. If N seconds\n> later that file is still not released, then change DataFileRead to\n> DataFileRead + relfilenode\n\nBut how would we implement that with reasonable efficiency? If we\ncalled setitimer() before every read() call to set the timeout, and\nthen again to clear it after the read(), that would probably be\nhideously expensive. 
Perhaps it would work to have a background\n\"heartbeat\" process that pings every backend in the system every 1s or\nsomething like that, and make the signal handler do this, but that\nsupposes that the signal handler would have ready access to this\ninformation, which doesn't seem totally straightforward to arrange,\nand that it would be OK for the signal handler to grab a lock to\nupdate shared memory, which as things stand today is definitely not\nsafe.\n\nI am not trying to say that there is no way that something like this\ncould be made to work. There's probably something that can be done. I\ndon't think I know what that thing is, though.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 19 Apr 2021 12:57:57 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: More info on pg_stat_activity Wait Event Name when is\n DataFileRead" } ]
[ { "msg_contents": "Hi,\n\nThis patch is a WIP fix for the issue described in [1], where the\nplanner picks a more expensive plan with partition-wise joins enabled,\nand disabling this option produces a cheaper plan. That's a bit strange\nbecause with the option disabled we consider *fewer* plans, so we should\nnot be able to generate a cheaper plan.\n\nThe problem lies in generate_orderedappend_paths(), which builds two\ntypes of append paths - with minimal startup cost, and with minimal\ntotal cost. That however does not work for queries with LIMIT, which\nalso need to consider at fractional cost, but the path interesting from\nthis perspective may be eliminated by other paths.\n\nConsider three paths (this comes from the reproducer shared in [1]):\n\n A: nestloop_path startup 0.585000 total 35708.292500\n B: nestloop_path startup 0.292500 total 150004297.292500\n C: mergejoin_path startup 9748.112737 total 14102.092737\n\nWith some reasonable LIMIT value (e.g. 5% of the data), we really want\nto pick path A. But that path is dominated both in startup cost (by B)\nand total cost (path C). Hence generate_orderedappend_paths() will\nignore it, and we'll generate a more expensive LIMIT plan.\n\nIn [2] Tom proposed to modify generate_orderedappend_paths() to also\nconsider the fractional cheapest_path, just like we do for startup and\ntotal costs. The attached patch does exactly that, and it does seem to\ndo the trick.\n\nThere are some loose ends, though:\n\n1) If get_cheapest_fractional_path_for_pathkeys returns NULL, it's not\nclear whether to default to cheapest_startup or cheapest_total. We might\nalso consider an incremental sort, in which case the startup cost\nmatters (for Sort it does not). 
So we may need an enhanced version of\nget_cheapest_fractional_path_for_pathkeys that generates such paths.\n\n2) Same for the cheapest_total - maybe there's a partially sorted path,\nand using it with incremental sort on top would be better than using\ncheapest_total_path + sort.\n\n3) Not sure if get_cheapest_fractional_path_for_pathkeys should worry\nabout require_parallel_safe, just like the other functions nearby.\n\nI'd argue that this patch does not need to add the Incremental Sort\ncapabilities in (1) and (2) - it's just another place where we decided\nnot to consider this sort variant yet.\n\nI'm not sure how much this depends on partition-wise join, and why\ndisabling it generates the right plan. The reproducer uses that, but\nAFAICS generate_orderedappend_paths() works like this since 2010 (commit\n11cad29c915). I'd bet the issue exists since then and it's possible to\nget similar cases even without partition-wise join.\n\nI can reproduce it on PostgreSQL 12, though (which however supports\npartition-wise join).\n\nNot sure whether this should be backpatched. We didn't get any reports\nuntil now, so it doesn't seem like a pressing issue. 
OTOH most users\nwon't actually notice this, they'll just get worse plans without\nrealizing there's a better option.\n\n\nregards\n\n[1]\nhttps://www.postgresql.org/message-id/011937a3-7427-b99f-13f1-c07a127cf94c%40enterprisedb.com\n\n[2] https://www.postgresql.org/message-id/4006636.1618577893%40sss.pgh.pa.us\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sat, 17 Apr 2021 01:52:19 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "PATCH: generate fractional cheapest paths in\n generate_orderedappend_path" }, { "msg_contents": "Hi,\n\n\nthanks for looking into it!\n\n\nFor some reason the patch doesn't apply at my end, could you repost one based at the master?\n\n\n> 1) If get_cheapest_fractional_path_for_pathkeys returns NULL, it's not\n> clear whether to default to cheapest_startup or cheapest_total. We might\n> also consider an incremental sort, in which case the startup cost\n> matters (for Sort it does not). So we may need an enhanced version of\n> get_cheapest_fractional_path_for_pathkeys that generates such paths.\n>\n> 2) Same for the cheapest_total - maybe there's a partially sorted path,\n> and using it with incremental sort on top would be better than using\n> cheapest_total_path + sort.\n\n> I'd argue that this patch does not need to add the Incremental Sort\n> capabilities in (1) and (2) - it's just another place where we decided\n> not to consider this sort variant yet.\n\n\nI'd say your reasoning is sound. If I'd want to get better partial costs for incremental sorts, I'd look at get_cheapest_fractional_path first. That sounds more important than generate_orderedappend_paths. 
Either way I'd say that is a completely separate issue and I think that should be looked at separately.\n\n\n>3) Not sure if get_cheapest_fractional_path_for_pathkeys should worry\n\n> about require_parallel_safe, just like the other functions nearby.\n\nI think it should. We have a ParallelAppend node after all.\nI'm not really familiar with the way get_cheapest_fractional_path_for_pathkeys is used, but a quick search suggests to me, that build_minmax_path was thus far the only one using it. And minmax paths are never parallel safe anyway. I think that is the reason it doesn't do that already.\n\n\nRegards\n\nArne\n\n________________________________\nFrom: Tomas Vondra <tomas.vondra@enterprisedb.com>\nSent: Saturday, April 17, 2021 1:52:19 AM\nTo: pgsql-hackers\nSubject: PATCH: generate fractional cheapest paths in generate_orderedappend_path\n\nHi,\n\nThis patch is a WIP fix for the issue described in [1], where the\nplanner picks a more expensive plan with partition-wise joins enabled,\nand disabling this option produces a cheaper plan. That's a bit strange\nbecause with the option disabled we consider *fewer* plans, so we should\nnot be able to generate a cheaper plan.\n\nThe problem lies in generate_orderedappend_paths(), which builds two\ntypes of append paths - with minimal startup cost, and with minimal\ntotal cost. That however does not work for queries with LIMIT, which\nalso need to consider at fractional cost, but the path interesting from\nthis perspective may be eliminated by other paths.\n\nConsider three paths (this comes from the reproducer shared in [1]):\n\n A: nestloop_path startup 0.585000 total 35708.292500\n B: nestloop_path startup 0.292500 total 150004297.292500\n C: mergejoin_path startup 9748.112737 total 14102.092737\n\nWith some reasonable LIMIT value (e.g. 5% of the data), we really want\nto pick path A. But that path is dominated both in startup cost (by B)\nand total cost (path C). 
Hence generate_orderedappend_paths() will\nignore it, and we'll generate a more expensive LIMIT plan.\n\nIn [2] Tom proposed to modify generate_orderedappend_paths() to also\nconsider the fractional cheapest_path, just like we do for startup and\ntotal costs. The attached patch does exactly that, and it does seem to\ndo the trick.\n\nThere are some loose ends, though:\n\n1) If get_cheapest_fractional_path_for_pathkeys returns NULL, it's not\nclear whether to default to cheapest_startup or cheapest_total. We might\nalso consider an incremental sort, in which case the startup cost\nmatters (for Sort it does not). So we may need an enhanced version of\nget_cheapest_fractional_path_for_pathkeys that generates such paths.\n\n2) Same for the cheapest_total - maybe there's a partially sorted path,\nand using it with incremental sort on top would be better than using\ncheapest_total_path + sort.\n\n3) Not sure if get_cheapest_fractional_path_for_pathkeys should worry\nabout require_parallel_safe, just like the other functions nearby.\n\nI'd argue that this patch does not need to add the Incremental Sort\ncapabilities in (1) and (2) - it's just another place where we decided\nnot to consider this sort variant yet.\n\nI'm not sure how much this depends on partition-wise join, and why\ndisabling it generates the right plan. The reproducer uses that, but\nAFAICS generate_orderedappend_paths() works like this since 2010 (commit\n11cad29c915). I'd bet the issue exists since then and it's possible to\nget similar cases even without partition-wise join.\n\nI can reproduce it on PostgreSQL 12, though (which however supports\npartition-wise join).\n\nNot sure whether this should be backpatched. We didn't get any reports\nuntil now, so it doesn't seem like a pressing issue. 
OTOH most users\nwon't actually notice this, they'll just get worse plans without\nrealizing there's a better option.\n\n\nregards\n\n[1]\nhttps://www.postgresql.org/message-id/011937a3-7427-b99f-13f1-c07a127cf94c%40enterprisedb.com\n\n[2] https://www.postgresql.org/message-id/4006636.1618577893%40sss.pgh.pa.us\n\n--\nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 26 Apr 2021 11:00:11 +0000", "msg_from": "Arne Roland <A.Roland@index.de>", "msg_from_op": false, "msg_subject": "Re: PATCH: generate fractional cheapest paths in\n generate_orderedappend_path" }, { "msg_contents": "Hi,\n\n\nI haven't tested the parallel case, but I think we should sort out (3) get_cheapest_fractional_path_for_pathkeys as mentioned above.\n\n\nI am lost about the comment regarding startup_new_fractional. Could you elaborate what you mean by that?\n\n\nApart from that, I'd argue for a small test case. I attached a slimmed down case of what we were trying to fix. It might be worth to integrate that with an existing test, since more than a third of the time seems to be consumed by the creation and attachment of partitions.\n\n\nRegards\nArne", "msg_date": "Thu, 3 Jun 2021 17:17:55 +0000", "msg_from": "Arne Roland <A.Roland@index.de>", "msg_from_op": false, "msg_subject": "Re: PATCH: generate fractional cheapest paths in\n generate_orderedappend_path" }, { "msg_contents": "Hi,\n\nOn 6/3/21 7:17 PM, Arne Roland wrote:\n> Hi,\n> \n> \n> I haven't tested the parallel case, but I think we should sort out (3)\n> get_cheapest_fractional_path_for_pathkeys as mentioned above.\n> \n\nNot sure what you refer to by \"above\" - it's probably better to reply\nin-line to existing message, which makes it much cleared.\n\n> \n> I am lost about the comment regarding startup_new_fractional. 
Could you\n> elaborate what you mean by that?\n> \n\nNot sure what this refers to either - there's no startup_new_fractional\nin my message and 'git grep startup_new_fractional' returns nothing.\n\n> \n> Apart from that, I'd argue for a small test case. I attached a slimmed\n> down case of what we were trying to fix. It might be worth to integrate\n> that with an existing test, since more than a third of the time seems to\n> be consumed by the creation and attachment of partitions.\n> \n\nMaybe, if there's a suitable table to reuse, we can do that. But I don't\nthink it matters it takes ~1/3 of the time to attach the partitions.\nWhat's more important is whether it measurably slows down the test\nsuite, and I don't think that's an issue.\n\nIn any case, this seems a bit premature - we need something to test the\npatch etc. We can worry about how expensive the test is much later.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 3 Jun 2021 20:11:48 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: generate fractional cheapest paths in\n generate_orderedappend_path" }, { "msg_contents": "On Thu, Jun 3, 2021 at 11:12 AM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n> Hi,\n>\n> On 6/3/21 7:17 PM, Arne Roland wrote:\n> > Hi,\n> >\n> >\n> > I haven't tested the parallel case, but I think we should sort out (3)\n> > get_cheapest_fractional_path_for_pathkeys as mentioned above.\n> >\n>\n> Not sure what you refer to by \"above\" - it's probably better to reply\n> in-line to existing message, which makes it much cleared.\n>\n> >\n> > I am lost about the comment regarding startup_new_fractional. 
Could you\n> > elaborate what you mean by that?\n> >\n>\n> Not sure what this refers to either - there's no startup_new_fractional\n> in my message and 'git grep startup_new_fractional' returns nothing.\n>\n> >\n> > Apart from that, I'd argue for a small test case. I attached a slimmed\n> > down case of what we were trying to fix. It might be worth to integrate\n> > that with an existing test, since more than a third of the time seems to\n> > be consumed by the creation and attachment of partitions.\n> >\n>\n> Maybe, if there's a suitable table to reuse, we can do that. But I don't\n> think it matters it takes ~1/3 of the time to attach the partitions.\n> What's more important is whether it measurably slows down the test\n> suite, and I don't think that's an issue.\n>\n> In any case, this seems a bit premature - we need something to test the\n> patch etc. We can worry about how expensive the test is much later.\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n> Hi,\nIn REL_11_STABLE branch, a search revealed the following:\n\nsrc/backend/optimizer/path/pathkeys.c: *\nget_cheapest_fractional_path_for_pathkeys\nsrc/backend/optimizer/path/pathkeys.c:get_cheapest_fractional_path_for_pathkeys(List\n*paths,\nsrc/backend/optimizer/plan/planagg.c:\nget_cheapest_fractional_path_for_pathkeys(final_rel->pathlist,\nsrc/include/optimizer/paths.h:extern Path\n*get_cheapest_fractional_path_for_pathkeys(List *paths,\n\nIt seems this function has been refactored out in subsequent releases.\n\nFYI", "msg_date": "Thu, 3 Jun 2021 13:50:46 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: generate fractional cheapest paths in\n generate_orderedappend_path" }, { "msg_contents": "On Thu, Jun 3, 2021 at 1:50 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n>\n>\n> On Thu, Jun 3, 2021 at 11:12 AM Tomas Vondra <\n> tomas.vondra@enterprisedb.com> 
wrote:\n>\n>> Hi,\n>>\n>> On 6/3/21 7:17 PM, Arne Roland wrote:\n>> > Hi,\n>> >\n>> >\n>> > I haven't tested the parallel case, but I think we should sort out (3)\n>> > get_cheapest_fractional_path_for_pathkeys as mentioned above.\n>> >\n>>\n>> Not sure what you refer to by \"above\" - it's probably better to reply\n>> in-line to existing message, which makes it much cleared.\n>>\n>> >\n>> > I am lost about the comment regarding startup_new_fractional. Could you\n>> > elaborate what you mean by that?\n>> >\n>>\n>> Not sure what this refers to either - there's no startup_new_fractional\n>> in my message and 'git grep startup_new_fractional' returns nothing.\n>>\n>> >\n>> > Apart from that, I'd argue for a small test case. I attached a slimmed\n>> > down case of what we were trying to fix. It might be worth to integrate\n>> > that with an existing test, since more than a third of the time seems to\n>> > be consumed by the creation and attachment of partitions.\n>> >\n>>\n>> Maybe, if there's a suitable table to reuse, we can do that. But I don't\n>> think it matters it takes ~1/3 of the time to attach the partitions.\n>> What's more important is whether it measurably slows down the test\n>> suite, and I don't think that's an issue.\n>>\n>> In any case, this seems a bit premature - we need something to test the\n>> patch etc. 
We can worry about how expensive the test is much later.\n>>\n>> regards\n>>\n>> --\n>> Tomas Vondra\n>> EnterpriseDB: http://www.enterprisedb.com\n>> The Enterprise PostgreSQL Company\n>>\n>>\n>> Hi,\n> In REL_11_STABLE branch, a search revealed the following:\n>\n> src/backend/optimizer/path/pathkeys.c: *\n> get_cheapest_fractional_path_for_pathkeys\n> src/backend/optimizer/path/pathkeys.c:get_cheapest_fractional_path_for_pathkeys(List\n> *paths,\n> src/backend/optimizer/plan/planagg.c:\n> get_cheapest_fractional_path_for_pathkeys(final_rel->pathlist,\n> src/include/optimizer/paths.h:extern Path\n> *get_cheapest_fractional_path_for_pathkeys(List *paths,\n>\n> It seems this function has been refactored out in subsequent releases.\n>\n> FYI\n>\n\nSent a bit too soon.\n\nThe above function still exists.\nBut startup_new_fractional was nowhere to be found.", "msg_date": "Thu, 3 Jun 2021 13:52:12 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: generate fractional cheapest paths in\n generate_orderedappend_path" }, { "msg_contents": "On 6/3/21 10:52 PM, Zhihong Yu wrote:\n> ...\n> \n> Sent a bit too soon.\n> \n> The above function still exists.\n> But startup_new_fractional was nowhere to be found.\n\nActually, there are two comments\n\n\t/* XXX maybe we should have startup_new_fractional? */\n\nin the patch I posted - I completely forgot about that. But I think\nthat's a typo, I think - it should be\n\n\t/* XXX maybe we should have startup_neq_fractional? */\n\nand the new flag would work similarly to startup_neq_total, i.e. 
it's\npointless to add paths where startup == fractional cost.\n\nAt least I think that was the idea when I wrote the patch, it was way too\nlong ago.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 3 Jun 2021 22:57:38 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: generate fractional cheapest paths in\n generate_orderedappend_path" }, { "msg_contents": "Hi,\n\n\nthanks for the quick reply!\n\n\nFrom: Tomas Vondra <tomas.vondra@enterprisedb.com>\nSent: Thursday, June 3, 2021 20:11\nTo: Arne Roland; pgsql-hackers\nSubject: Re: PATCH: generate fractional cheapest paths in generate_orderedappend_path\n\n> I haven't tested the parallel case, but I think we should sort out (3)\n> get_cheapest_fractional_path_for_pathkeys as mentioned above.\n>\n\nNot sure what you refer to by \"above\" - it's probably better to reply\nin-line to existing message, which makes it much cleared.\n\n\nI was referring to one message above. I thought the thread was still short enough. Apparently too much time has passed. Sorry, I hope this mail is better. I was referring to my post from April:\n________________________________\nFrom: Arne Roland\nSent: Monday, April 26, 2021 13:00\nTo: Tomas Vondra; pgsql-hackers\nSubject: Re: PATCH: generate fractional cheapest paths in generate_orderedappend_path\n\n\n>3) Not sure if get_cheapest_fractional_path_for_pathkeys should worry\n\n> about require_parallel_safe, just like the other functions nearby.\n\nI think it should. We have a ParallelAppend node after all.\nI'm not really familiar with the way get_cheapest_fractional_path_for_pathkeys is used, but a quick search suggests to me, that build_minmax_path was thus far the only one using it. And minmax paths are never parallel safe anyway. 
I think that is the reason it doesn't do that already.\n\n________________________________\nFrom: Zhihong Yu <zyu@yugabyte.com>\nSent: Thursday, June 3, 2021 22:50\nTo: Tomas Vondra\nCc: Arne Roland; pgsql-hackers\nSubject: Re: PATCH: generate fractional cheapest paths in generate_orderedappend_path\n\n\nHi,\nIn REL_11_STABLE branch, a search revealed the following:\n\nsrc/backend/optimizer/path/pathkeys.c: * get_cheapest_fractional_path_for_pathkeys\nsrc/backend/optimizer/path/pathkeys.c:get_cheapest_fractional_path_for_pathkeys(List *paths,\nsrc/backend/optimizer/plan/planagg.c: get_cheapest_fractional_path_for_pathkeys(final_rel->pathlist,\nsrc/include/optimizer/paths.h:extern Path *get_cheapest_fractional_path_for_pathkeys(List *paths,\n\nIt seems this function has been refactored out in subsequent releases.\n\nFYI\n\nThanks for the info!\nI doubt there is any interest to back patch this anywhere. My most ambitious dream would be getting this into pg 14.\n\nI think, we only care about a parallel safety aware variant anyways, which afaict never existed.\n\n________________________________\nFrom: Tomas Vondra <tomas.vondra@enterprisedb.com>\nSent: Thursday, June 3, 2021 22:57\nTo: Zhihong Yu\nCc: Arne Roland; pgsql-hackers\nSubject: Re: PATCH: generate fractional cheapest paths in generate_orderedappend_path\nActually, there are two comments\n\n /* XXX maybe we should have startup_new_fractional? */\n\nin the patch I posted - I completely forgot about that. But I think\nthat's a typo, I think - it should be\n\n /* XXX maybe we should have startup_neq_fractional? */\n\nand the new flag would work similarly to startup_neq_total, i.e. it's\npointless to add paths where startup == fractional cost.\n\nAt least I think that was the idea when I wrote the patch, it way too\nlong ago.\n\nSorry, I almost forgot about this myself. 
I only got reminded upon seeing that again with different queries/tables.\nJust to be sure I get this correctly: You mean startup_gt_fractional (cost) as an additional condition, right?\n\nRegards\nArne", "msg_date": "Fri, 4 Jun 2021 00:10:25 +0000", "msg_from": "Arne Roland <A.Roland@index.de>", "msg_from_op": false, "msg_subject": "Re: PATCH: generate fractional cheapest paths in\n generate_orderedappend_path" }, { "msg_contents": "Hi Tomas,\n\nI don't think there is much work left to do here.\n\nDid you have a look at the test case? Did it make sense to you?\n\nAnd I am sorry. I had another look at this and it seems I was confused (again).\n\nFrom: Arne Roland\nSent: Monday, April 26, 2021 13:00\nTo: Tomas Vondra; pgsql-hackers\nSubject: Re: PATCH: generate fractional cheapest paths in generate_orderedappend_path\n\n> I think it should. We have a ParallelAppend node after all.\n> I'm not really familiar with the way get_cheapest_fractional_path_for_pathkeys is used, but a quick search suggests to\n> me, that build_minmax_path was thus far the only one using it. And minmax paths are never parallel safe anyway. I think that is the reason it doesn't do that already.\n\nThe whole segment we are talking about obviously assumes require_parallel_safe is not needed. I wasn't aware of that in set_append_rel_size. And I just realized there is a great comment explaining why it rightfully does so:\n /*\n * If any live child is not parallel-safe, treat the whole appendrel\n * as not parallel-safe. 
In future we might be able to generate plans\n * in which some children are farmed out to workers while others are\n * not; but we don't have that today, so it's a waste to consider\n * partial paths anywhere in the appendrel unless it's all safe.\n * (Child rels visited before this one will be unmarked in\n * set_append_rel_pathlist().)\n */\nSo afaik we don't need to think further about this.\n\n\nFrom: Tomas Vondra <tomas.vondra@enterprisedb.com>\nSent: Thursday, June 3, 2021 22:57\nTo: Zhihong Yu\nCc: Arne Roland; pgsql-hackers\nSubject: Re: PATCH: generate fractional cheapest paths in generate_orderedappend_path\n> Actually, there are two comments\n>\n> /* XXX maybe we should have startup_new_fractional? */\n>\n> in the patch I posted - I completely forgot about that. But I think\n> that's a typo, I think - it should be\n>\n> /* XXX maybe we should have startup_neq_fractional? */\n>\n> and the new flag would work similarly to startup_neq_total, i.e. it's\n> pointless to add paths where startup == fractional cost.\n>\n> At least I think that was the idea when I wrote the patch, it way too\n> long ago.\n\n> Sorry, I almost forgot about this myself. I only got reminded upon seeing that again with different queries/tables.\n> Just to be sure I get this correctly: You mean startup_gt_fractional (cost) as an additional condition, right?\n\nCould you clarify that for me?\n\nRegards\nArne", "msg_date": "Sat, 26 Jun 2021 15:50:49 +0000", "msg_from": "Arne Roland <A.Roland@index.de>", "msg_from_op": false, "msg_subject": "Re: PATCH: generate fractional cheapest paths in\n generate_orderedappend_path" }, { "msg_contents": "Afaiac we should add a simple testcase here, like I suggested in 477344d5f17c4a8e95d3a5bb6642718a<https://www.postgresql.org/message-id/477344d5f17c4a8e95d3a5bb6642718a%40index.de>. Apart from that I am not sure there is work to be done here.\n\n\nAm I wrong?\n\n\nRegards\n\nArne\n\n________________________________\nFrom: Arne Roland <A.Roland@index.de>\nSent: Saturday, June 26, 2021 5:50:49 PM\nTo: Tomas Vondra\nCc: pgsql-hackers\nSubject: Re: PATCH: generate fractional cheapest paths in generate_orderedappend_path\n\n\nHi Tomas,\n\nI don't think there is much work left to do here.\n\nDid you have a look at the test case? Did it make sense to you?\n\nAnd I am sorry. I had another look at this and it seems I was confused (again).\n\nFrom: Arne Roland\nSent: Monday, April 26, 2021 13:00\nTo: Tomas Vondra; pgsql-hackers\nSubject: Re: PATCH: generate fractional cheapest paths in generate_orderedappend_path\n\n> I think it should. We have a ParallelAppend node after all.\n> I'm not really familiar with the way get_cheapest_fractional_path_for_pathkeys is used, but a quick search suggests to\n> me, that build_minmax_path was thus far the only one using it. And minmax paths are never parallel safe anyway. I think that is the reason it doesn't do that already.\n\nThe whole segment were are talking about obviously assumes require_parallel_safe is not needed. I wasn't aware that in set_append_rel_size. 
And I just realized there is a great comment explaining why it rightfully does so:\n /*\n * If any live child is not parallel-safe, treat the whole appendrel\n * as not parallel-safe. In future we might be able to generate plans\n * in which some children are farmed out to workers while others are\n * not; but we don't have that today, so it's a waste to consider\n * partial paths anywhere in the appendrel unless it's all safe.\n * (Child rels visited before this one will be unmarked in\n * set_append_rel_pathlist().)\n */\nSo afaik we don't need to think further about this.\n\n\nFrom: Tomas Vondra <tomas.vondra@enterprisedb.com>\nSent: Thursday, June 3, 2021 22:57\nTo: Zhihong Yu\nCc: Arne Roland; pgsql-hackers\nSubject: Re: PATCH: generate fractional cheapest paths in generate_orderedappend_path\n> Actually, there are two comments\n>\n> /* XXX maybe we should have startup_new_fractional? */\n>\n> in the patch I posted - I completely forgot about that. But I think\n> that's a typo, I think - it should be\n>\n> /* XXX maybe we should have startup_neq_fractional? */\n>\n> and the new flag would work similarly to startup_neq_total, i.e. it's\n> pointless to add paths where startup == fractional cost.\n>\n> At least I think that was the idea when I wrote the patch, it way too\n> long ago.\n\n> Sorry, I almost forgot about this myself. I only got reminded upon seeing that again with different queries/tables.\n> Just to be sure I get this correctly: You mean startup_gt_fractional (cost) as an additional condition, right?\n\nCould you clarify that for me?\n\nRegards\nArne", "msg_date": "Thu, 2 Dec 2021 14:58:28 +0000", "msg_from": "Arne Roland <A.Roland@index.de>", "msg_from_op": false, "msg_subject": "Re: PATCH: generate fractional cheapest paths in\n generate_orderedappend_path" }, { "msg_contents": "Hi,\n\nOn 12/2/21 15:58, Arne Roland wrote:\n> Afaiac we should add a simple testcase here, like I suggested in \n> 477344d5f17c4a8e95d3a5bb6642718a \n> <https://www.postgresql.org/message-id/477344d5f17c4a8e95d3a5bb6642718a%40index.de>. 
\n> Apart from that I am not sure there is work to be done here.\n> \n\nWell, I mentioned three open questions in my first message, and I don't \nthink we've really addressed them yet. We've agreed we don't need to add \nthe incremental sort here, but that leaves\n\n\n1) If get_cheapest_fractional_path_for_pathkeys returns NULL, should we \ndefault to cheapest_startup or cheapest_total?\n\n2) Should get_cheapest_fractional_path_for_pathkeys worry about \nrequire_parallel_safe? I think yes, but we need to update the patch.\n\nI'll take a closer look next week, once I get home from NYC, and I'll \nsubmit an improved version for the January CF.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 2 Dec 2021 20:58:44 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: generate fractional cheapest paths in\n generate_orderedappend_path" }, { "msg_contents": "Hi,\n\nthanks for the reply!\n\nFrom: Tomas Vondra <tomas.vondra@enterprisedb.com>\nSent: Thursday, December 2, 2021 20:58\nSubject: Re: PATCH: generate fractional cheapest paths in generate_orderedappend_path\n> [...]\n> Well, I mentioned three open questions in my first message, and I don't\n> think we've really addressed them yet. We've agreed we don't need to add\n> the incremental sort here, but that leaves\n>\n>\n> 1) If get_cheapest_fractional_path_for_pathkeys returns NULL, should we\n> default to cheapest_startup or cheapest_total?\n\nI think it's reasonable to use cheapest_total like we are doing now. I hardly see any reason to change it.\nThe incremental sort case you mentioned, seems like the only case that plan might be interesting. If we really want that to happen, we probably should check for that separately, i.e. having startup_fractional. 
Even though this is a fairly special case as it's mostly relevant for partitionwise joins, I'm still not convinced it's worth the cpu cycles. The point is that in most cases fractional and startup_fractional will be the same anyways.\nAnd I suspect, even if they aren't, solving that from an application developer's point of view is in most cases not that difficult. One can usually match the pathkey. I personally had a lot of real world issues with missing fractional paths using primary keys. I think it's worth noting that everything will likely match the partition keys anyways, because otherwise there is no chance of doing a merge append.\nIf I am not mistaken, in case we end up doing a full sort, the cheapest_total path should be completely sufficient.\n\n> 2) Should get_cheapest_fractional_path_for_pathkeys worry about\n> require_parallel_safe? I think yes, but we need to update the patch.\n\nI admit that such a version of get_cheapest_fractional_path_for_pathkeys could be consistent with other functions. And I was confused about that before. But I am not sure what to use require_parallel_safe for. build_minmax_path doesn't care, they are never parallel safe. And afaict generate_orderedappend_paths cares neither, it considers all plans. For instance when it calls get_cheapest_path_for_pathkeys, it sets require_parallel_safe just to false as well.\n\n> I'll take a closer look next week, once I get home from NYC, and I'll\n> submit an improved version for the January CF.\n\nThank you for your work! The current patch, apart from the comments/regression tests, seems pretty reasonable to me.\n\nRegards\nArne", "msg_date": "Thu, 9 Dec 2021 23:51:02 +0000", "msg_from": "Arne Roland <A.Roland@index.de>", "msg_from_op": false, "msg_subject": "Re: PATCH: generate fractional cheapest paths in\n generate_orderedappend_path" }, { "msg_contents": "On 12/10/21 00:51, Arne Roland wrote:\n> Hi,\n> \n> thanks for the reply!\n> \n> From: Tomas Vondra <tomas.vondra@enterprisedb.com>\n> Sent: Thursday, December 2, 2021 20:58\n> Subject: Re: PATCH: generate fractional cheapest paths in \n> generate_orderedappend_path\n> > [...]\n> > Well, I mentioned three open questions in my first message, and I don't\n> > think we've really addressed them yet. We've agreed we don't need to add\n> > the incremental sort here, but that leaves\n> >\n> >\n> > 1) If get_cheapest_fractional_path_for_pathkeys returns NULL, should we\n> > default to cheapest_startup or cheapest_total?\n> \n> I think it's reasonable to use cheapest_total like we are doing now. I \n> hardly see any reason to change it.\n\nTrue, it's a reasonable first step.\n\nEither we generate the same plan as today (with cheapest_total), or \nmaybe a better one (if we find a cheaper fractional path with matching \npathkeys). It's true we could do better, but that's life - it's not like \nwe consider every possible path everywhere.\n\n> The incremental sort case you mentioned, seems like the only case that \n> plan might be interesting. If we really want that to happen, we probably \n> should check for that separately, i.e. having startup_fractional. 
Even \n> though this is a fairly special case as it's mostly relevant for \n> partitionwise joins, I'm still not convinced it's worth the cpu cycles. \n> The point is that in most cases factional and startup_fractional will be \n> the same anyways.\n\nI don't think we can simply check for startup_fractional (although, I'm \nnot sure I fully understand what that would be). I think what we should \nreally do in this case is walking all the paths, ensuring it's properly \nsorted (with either full or incremental sort), and then picking the \ncheapest fractional path from these sorted paths. But I agree that seems \npretty expensive.\n\n> And I suspect, even if they aren't, solving that from an application \n> developers point of view, is in most cases not that difficult. One can \n> usually match the pathkey. I personally had a lot of real world issues \n> with missing fractional paths using primary keys. I think it's worth \n> noting that everything will likely match the partition keys anyways, \n> because otherwise there is no chance of doing a merge append.\n\nYeah, I think you're right.\n\n> If I am not mistaken, in case we end up doing a full sort, the \n> cheapest_total path should be completely sufficient.\n> \n\nDefinitely true.\n\n> > 2) Should get_cheapest_fractional_path_for_pathkeys worry about\n> > require_parallel_safe? I think yes, but we need to update the patch.\n> \n> I admit, that such a version of \n> get_cheapest_fractional_path_for_pathkeys could be consistent with other \n> functions. And I was confused about that before. But I am not sure what \n> to use require_parallel_safe for. build_minmax_path doesn't care, they \n> are never parallel safe. And afaict generate_orderedappend_paths cares \n> neither, it considers all plans. For instance when it calls \n> get_cheapest_path_for_pathkeys, it sets require_parallel_safe just to \n> false as well.\n> \n\nTrue as well. 
It looks a bit strange, but you're right neither place \ncares about parallel safety.\n\n> > I'll take a closer look next week, once I get home from NYC, and I'll\n> > submit an improved version for the January CF.\n> \n> Thank you for your work! The current patch, apart from the \n> comments/regression tests, seems pretty reasonable to me.\n> \n\nAttached is a cleaned-up patch, with a simple regression test. I'll mark \nthis as RFC and get it committed in a couple days.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sat, 11 Dec 2021 02:34:30 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: generate fractional cheapest paths in\n generate_orderedappend_path" }, { "msg_contents": "Pushed, after clarifying the comments a bit.\n\nI also looked into what it would take to consider incremental paths, and \nin principle it doesn't seem all that complicated. The attached PoC \npatch extends get_cheapest_fractional_path_for_pathkeys() to optionally \nbuild incremental sort on paths if needed. There are two GUCs that make \nexperimenting simpler:\n\n* enable_fractional_paths - disables fractional paths entirely, i.e. 
we \nget behavior without the part I already pushed\n\n* enable_fractional_incremental_paths - disables the incremental sort \npart added by the attached patch\n\nWith this, I get this plan (see the test in partitionwise_join.sql)\n\ntest=# EXPLAIN (COSTS OFF)\ntest-# SELECT * FROM fract_t x LEFT JOIN fract_t y USING (id1, id2) \nORDER BY id1 ASC, id2 ASC LIMIT 10;\n QUERY PLAN \n\n------------------------------------------------------------------------------\n Limit\n -> Merge Left Join\n Merge Cond: ((x.id1 = y.id1) AND (x.id2 = y.id2))\n -> Append\n -> Index Only Scan using fract_t0_id1_id2_idx on\n fract_t0 x_1\n -> Incremental Sort\n Sort Key: x_2.id1, x_2.id2\n Presorted Key: x_2.id1\n -> Index Scan using fract_t1_pkey on fract_t1 x_2\n -> Materialize\n -> Append\n -> Incremental Sort\n Sort Key: y_1.id1, y_1.id2\n Presorted Key: y_1.id1\n -> Index Scan using fract_t0_pkey on\n fract_t0 y_1\n Index Cond: (id1 = id1)\n Filter: (id2 = id2)\n -> Incremental Sort\n Sort Key: y_2.id1, y_2.id2\n Presorted Key: y_2.id1\n -> Index Scan using fract_t1_pkey on\n fract_t1 y_2\n Index Cond: (id1 = id1)\n Filter: (id2 = id2)\n(23 rows)\n\ninstead of\n\n QUERY PLAN \n\n------------------------------------------------------------------------------\n Limit\n -> Incremental Sort\n Sort Key: x.id1, x.id2\n Presorted Key: x.id1\n -> Merge Left Join\n Merge Cond: (x.id1 = y.id1)\n Join Filter: (x.id2 = y.id2)\n -> Append\n -> Index Scan using fract_t0_pkey on fract_t0 x_1\n -> Index Scan using fract_t1_pkey on fract_t1 x_2\n -> Materialize\n -> Append\n -> Index Scan using fract_t0_pkey on\n fract_t0 y_1\n -> Index Scan using fract_t1_pkey on\n fract_t1 y_2\n(14 rows)\n\ni.e. 
the incremental sorts moved below the merge join (and the cost is \nlower, but that's not shown here).\n\nSo that seems reasonable, but there's a couple issues too:\n\n1) Planning works (hence we can run explain), but execution fails \nbecause of segfault in CheckVarSlotCompatibility - it gets NULL slot for \nsome reason. I haven't looked into the details, but I'd guess we need to \npass a different rel to create_incrementalsort_path, not childrel.\n\n2) enable_partitionwisejoin=on seems to cause some confusion, because it \nresults in picking a different plan with higher cost. But that's clearly \nnot because of this patch.\n\n3) I'm still a bit skeptical about the cost of this implementation - it \nbuilds the incrementalsort path, just to throw most of the paths away. \nIt'd be much better to just calculate the cost, which should be much \ncheaper, and only do the heavy work for the one \"best\" path.\n\n4) One thing I did not realize before is what pathkeys we even consider \nhere. Imagine you have two tables:\n\n CREATE TABLE t1 (a int, b int, primary key (a));\n CREATE TABLE t2 (a int, b int, primary key (a));\n\nand query\n\n SELECT * FROM t1 JOIN t2 USING (a,b);\n\nIt seems reasonable to also consider pathkeys (a,b) because that's make \ne.g. mergejoin much cheaper, right? But no, we'll not do that - we only \nconsider pathkeys that at least one child relation has, so in this case \nit's just (a). Which means we'll never consider incremental sort for \nthis particular example.\n\nIt's a bit strange, because it's enough to create index on (a,b) for \njust one of the relations, and it'll suddenly consider building \nincremental sort on both sides.\n\n\nI don't plan to pursue this further at this point, so I pushed the first \npart because that's a fairly simple improvement over what we have now. \nBut it seems like a fairly promising area for improvements.\n\nAlso, the non-intuitive effects of enabling partition-wise joins (i.e. 
\npicking plans with higher estimated costs) is something worth exploring, \nI guess. But that's a separate issue.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 12 Jan 2022 23:43:04 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: generate fractional cheapest paths in\n generate_orderedappend_path" }, { "msg_contents": "FWIW this is now marked as committed. I've created a separate entry in \nthe next CF for the incremental sort part.\n\n\nhttps://commitfest.postgresql.org/37/3513/\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 13 Jan 2022 17:20:45 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: generate fractional cheapest paths in\n generate_orderedappend_path" }, { "msg_contents": "Hi!\n\n> From: Tomas Vondra <tomas.vondra@enterprisedb.com>\n> Subject: Re: PATCH: generate fractional cheapest paths in generate_orderedappend_path\n>\n> test-# SELECT * FROM fract_t x LEFT JOIN fract_t y USING (id1, id2)\n> ORDER BY id1 ASC, id2 ASC LIMIT 10;\n> QUERY PLAN\n>\n> ------------------------------------------------------------------------------\n> Limit\n> -> Merge Left Join\n> Merge Cond: ((x.id1 = y.id1) AND (x.id2 = y.id2))\n> -> Append\n> -> Index Only Scan using fract_t0_id1_id2_idx on\n> fract_t0 x_1\n> -> Incremental Sort\n> Sort Key: x_2.id1, x_2.id2\n> Presorted Key: x_2.id1\n> -> Index Scan using fract_t1_pkey on fract_t1 x_2\n> -> Materialize\n> -> Append\n> -> Incremental Sort\n> Sort Key: y_1.id1, y_1.id2\n> Presorted Key: y_1.id1\n> -> Index Scan using fract_t0_pkey on\n> fract_t0 y_1\n> Index Cond: (id1 = id1)\n> Filter: (id2 = id2)\n> -> Incremental Sort\n> Sort Key: y_2.id1, y_2.id2\n> Presorted Key: y_2.id1\n> -> Index Scan using fract_t1_pkey on\n> 
fract_t1 y_2\n> Index Cond: (id1 = id1)\n> Filter: (id2 = id2)\n> (23 rows)\n> [...]\n> So that seems reasonable\n\nMaybe I'm just slow, but that doesn't seem reasonable to me. It doesn't look like a valid plan to me. Sure all the nodes are arranged like I'd like them to be. But what are the id1/id2 bounds we have in the index and filter conditions?\n\n> [...]but there's a couple issues too:\n>\n> 1) Planning works (hence we can run explain), but execution fails\n> because of segfault in CheckVarSlotCompatibility - it gets NULL slot for\n> some reason. I haven't looked into the details, but I'd guess we need to\n> pass a different rel to create_incrementalsort_path, not childrel.\n\nIn case my above concern is valid, maybe the executor is just as confused as I am. Such conditions should generate VarSlots, no? If so, where should they come from?\n\nSadly I don't have time to debug that in depth today.\n\n> 2) enable_partitionwisejoin=on seems to cause some confusion, because it\n> results in picking a different plan with higher cost. But that's clearly\n> not because of this patch.\n\nShort version: I'd neither conceptually expect costs to be lower in any case, nor would that be desirable, because our estimates aren't perfect.\n\nLong version: What do you mean by confusion? The plan I get with the patch doesn't seem too confusing to me. Generally something like that is to be expected. enable_partitionwisejoin changes the way this planning works by rewriting the entire query effectively rule-based. So we end up with a completely different query. I'd in particular expect slightly different startup costs.\nSo if we activate this we consider completely different plans, I struggle to come up with a meaningful example where there is any overlap at all. Thus it doesn't surprise me conceptually.\n From practical experience I'd say: If they are about the same plan, the cost estimates work somewhat okish.\nIf we change the way we compose the nodes together, we sometimes end up with radically different costs for doing the same. While I suspect there are a lot of corner cases causing this, I've seen quite a few where this was due to the fact that our planner often has insufficient statistics to know something and takes a guess. This has gotten way better in recent years, but it's in particular for non-trivial joins still a problem in practice.\n\n> 3) I'm still a bit skeptical about the cost of this implementation - it\n> builds the incrementalsort path, just to throw most of the paths away.\n> It'd be much better to just calculate the cost, which should be much\n> cheaper, and only do the heavy work for the one \"best\" path.\n\nMaybe we should profile this to get a rough estimate of how much time we spend building these incremental paths. From a code perspective it's non-trivial to me where the time is lost.\n\n> 4) One thing I did not realize before is what pathkeys we even consider\n> here. Imagine you have two tables:\n>\n>     CREATE TABLE t1 (a int, b int, primary key (a));\n>     CREATE TABLE t2 (a int, b int, primary key (a));\n>\n> and query\n>\n>     SELECT * FROM t1 JOIN t2 USING (a,b);\n>\n> It seems reasonable to also consider pathkeys (a,b) because that's make\n> e.g. mergejoin much cheaper, right? But no, we'll not do that - we only\n> consider pathkeys that at least one child relation has, so in this case\n> it's just (a). 
Which means we'll never consider incremental sort for\n> this particular example.\n>\n> It's a bit strange, because it's enough to create index on (a,b) for\n> just one of the relations, and it'll suddenly consider building\n> incremental sort on both sides.\n\nI don't find that surprising, because the single index *might* make the incremental sort cheaper for the join *without* considering any external sort order.\nSo we would be switching up the incremental sort and the mergejoin, in case we need to sort anyways. That would mean considering also the sort order, that might be relevant on the outside. Sounds like an interesting idea for a later patch.\n\n> I don't plan to pursue this further at this point, so I pushed the first\n> part because that's a fairly simple improvement over what we have now.\n> But it seems like a fairly promising area for improvements.\n\nI think 1) is pretty important, so we should sort that out sooner than later. 
Apart form that: :+1:\nThank you!\n\nRegards\nArne", "msg_date": "Thu, 13 Jan 2022 20:12:14 +0000", "msg_from": "Arne Roland <A.Roland@index.de>", "msg_from_op": false, "msg_subject": "Re: PATCH: generate fractional cheapest paths in\n generate_orderedappend_path" }, { "msg_contents": "On 1/13/22 21:12, Arne Roland wrote:\n>  Hi!\n> \n>> From: Tomas Vondra <tomas.vondra@enterprisedb.com>\n>> Subject: Re: PATCH: generate fractional cheapest paths in\n> generate_orderedappend_path\n>>  \n>> test-# SELECT * FROM fract_t x LEFT JOIN fract_t y USING (id1, id2)\n>> ORDER BY id1 ASC, id2 ASC LIMIT 10;\n>>                                    QUERY PLAN\n>>\n>>\n> ------------------------------------------------------------------------------\n>>   Limit\n>>     ->  Merge Left Join\n>>           Merge Cond: ((x.id1 = y.id1) AND (x.id2 = y.id2))\n>>           ->  Append\n>>                 ->  Index Only Scan using fract_t0_id1_id2_idx on\n>>                                           fract_t0 x_1\n>>                 ->  Incremental Sort\n>>                       Sort Key: x_2.id1, x_2.id2\n>>                       Presorted Key: x_2.id1\n>>                       ->  Index Scan using fract_t1_pkey on fract_t1 x_2\n>>           ->  Materialize\n>>                 ->  Append\n>>                       ->  Incremental Sort\n>>                             Sort Key: y_1.id1, y_1.id2\n>>                             Presorted Key: y_1.id1\n>>                             ->  Index Scan using fract_t0_pkey on\n>>                                                  fract_t0 y_1\n>>                                   Index Cond: (id1 = id1)\n>>                                   Filter: (id2 = id2)\n>>                       ->  Incremental Sort\n>>                             Sort Key: y_2.id1, y_2.id2\n>>                             Presorted Key: y_2.id1\n>>                             ->  Index Scan using fract_t1_pkey on\n>>                                                  fract_t1 y_2\n>>  
                                 Index Cond: (id1 = id1)\n>>                                   Filter: (id2 = id2)\n>> (23 rows)\n>> [...]\n>> So that seems reasonable\n> \n> Maybe I'm just slow, but that doesn't seem reasonable to me. It doesn't\n> look like a valid plan to me. Sure all the nodes are arranged like I'd\n> like them to be. But what are the id1/id2 bound we have in the index and\n> filter conditions?\n> \n\nI'm not claiming the plan is 100% correct, and you may have a point\nabout the index condition / filter in the materialize branch.\n\nBut the overall plan shape (with incremental sort nodes on both sides)\nseems reasonable to me. The materialize node is expected (incremental\nsort does not support rescans cheaply).\n\nMaybe that's not any cheaper than just doing merge join on the first\ncolumn, and filter on the second. But we should be able to decide that\nbased on cost, I think.\n\n>> [...]but there's a couple issues too:\n>>\n>> 1) Planning works (hence we can run explain), but execution fails\n>> because of segfault in CheckVarSlotCompatibility - it gets NULL slot for\n>> some reason. I haven't looked into the details, but I'd guess we need to\n>> pass a different rel to create_incrementalsort_path, not childrel.\n> \n> In case my above concern is valid, maybe the executor is just as\n> confused as I am. Such conditions should generate VarSlots, no? If so,\n> where should they come from?\n> \n\nYeah, something like that.\n\n> Sadly I don't have time to debug that in depth today.\n> \n>> 2) enable_partitionwisejoin=on seems to cause some confusion, because it\n>> results in picking a different plan with higher cost. But that's clearly\n>> not because of this patch.\n> \n> Short version: I'd neither conceptually expect costs to be lower in any\n> case, nor would that be desirable, because our estimates aren't perfect.\n> \n> Long version: What do you mean by confusion. The plan I get with the\n> patch doesn't seem to confusing to me. 
Generally something like that is\n> to be expected. enable_partitionwisejoin changes the way this planing\n> works by rewriting the entire query effectively rule based. So we end up\n> with a completely different query. I'd in particular expect slightly\n> different startup costs.\n> So if we activate this we consider completely different plans, I\n> struggle to come up with a meaningful example where there is any overlap\n> at all. Thus it doesn't surprise me conceptually.\n> From practical experience I'd say: If they are about the same plan, the\n> costs estimates work somewhat okish.\n> If we change the way we compose the nodes together, we sometimes end up\n> with radical different costs for doing the same. While I suspect there\n> are a lot of corner cases causing this, I've seen quite a few where this\n> was due to the fact, that our planer often has insignificant statistics\n> to know something and takes a guess. This has gotten way better of\n> recent years, but it's in particular for non-trivial joins still a\n> problem in practice.\n> \n\nBy confusion I meant that if you plan the query with partitionwise join\nenabled, you get a plan with cost X, and if you disable it you get a\ndifferent plan with cost Y, where (Y < X). Which is somewhat unexpected,\nbecause that seems to simply reduce the set of plans we explore.\n\nBut as you say, enable_partitionwise_join kinda \"switches\" between two\nplanning modes. Not sure why we don't try building both paths and decide\nbased on costs.\n\n>> 3) I'm still a bit skeptical about the cost of this implementation - it\n>> builds the incrementalsort path, just to throw most of the paths away.\n>> It'd be much better to just calculate the cost, which should be much\n>> cheaper, and only do the heavy work for the one \"best\" path.\n> \n> Maybe we should profile this to get a rough estimate, how much time we\n> spend building these incremental paths. 
From a code perspective it's non\n> trivial to me where the time is lost.\n> \n\nTBH I haven't really done any profiling, but I wouldn't be surprised if\nit got somewhat expensive for large number of child relations,\nespecially if there are a couple indexes on each. We do something\nsimilar for nestloop (see initial_cost_nestloop).\n\n>> 4) One thing I did not realize before is what pathkeys we even consider\n>> here. Imagine you have two tables:\n>>\n>>     CREATE TABLE t1 (a int, b int, primary key (a));\n>>     CREATE TABLE t2 (a int, b int, primary key (a));\n>>\n>> and query\n>>\n>>     SELECT * FROM t1 JOIN t2 USING (a,b);\n>>\n>> It seems reasonable to also consider pathkeys (a,b) because that's make\n>> e.g. mergejoin much cheaper, right? But no, we'll not do that - we only\n>> consider pathkeys that at least one child relation has, so in this case\n>> it's just (a). Which means we'll never consider incremental sort for\n>> this particular example.\n>>\n>> It's a bit strange, because it's enough to create index on (a,b) for\n>> just one of the relations, and it'll suddenly consider building\n>> incremental sort on both sides.\n> \n> I don't find that surprising, because the single index *might* make the\n> incremental sort cheaper for the join *without* considering any external\n> sort order.\n> So we would be switching up the incremental sort and the mergejoin, in\n> case we need to sort anyways. That would mean considering also the sort\n> order, that might be relevant on the outside. Sounds like an interesting\n> idea for a later patch.\n> \n\nI'm not sure it depends on the incremental sort. I suspect in some cases\nit might be faster to fully sort the merge join inputs, even if none of\nthe input paths has suitable pathkeys.\n\nFor example, if you do\n\n ... 
FROM t1 JOIN t2 USING (a,b) ...\n\nbut there are only indexes on (a), maybe sorting on (a,b) would win e.g.\nif there's a lot of duplicate values in (a)?\n\nI was thinking about this variation on example from the committed patch:\n\n CREATE TABLE fract_t (id1 BIGINT, id2 BIGINT)\n PARTITION BY RANGE (id1);\n\n CREATE TABLE fract_t0 PARTITION OF fract_t\n FOR VALUES FROM ('0') TO ('10');\n\n CREATE TABLE fract_t1 PARTITION OF fract_t\n FOR VALUES FROM ('10') TO ('20');\n\n CREATE INDEX ON fract_t(id1);\n\n INSERT INTO fract_t (id1, id2)\n SELECT i/100000, i FROM generate_series(0, 1999999) s(i);\n\n ANALYZE fract_t;\n\n -- not interested in nestloop/hashjoin paths for now\n set enable_hashjoin = off;\n set enable_nestloop = off;\n set max_parallel_workers_per_gather = 0;\n\n EXPLAIN (COSTS OFF)\n SELECT * FROM fract_t x JOIN fract_t y USING (id1, id2) order by id1;\n\nwhich is now planned like this:\n\n QUERY PLAN\n-----------------------------------------------------\n Merge Join\n Merge Cond: ((x.id1 = y.id1) AND (x.id2 = y.id2))\n -> Sort\n Sort Key: x.id1, x.id2\n -> Append\n -> Seq Scan on fract_t0 x_1\n -> Seq Scan on fract_t1 x_2\n -> Materialize\n -> Sort\n Sort Key: y.id1, y.id2\n -> Append\n -> Seq Scan on fract_t0 y_1\n -> Seq Scan on fract_t1 y_2\n(13 rows)\n\nBut maybe a plan like this might be better:\n\n QUERY PLAN\n-----------------------------------------------------\n Merge Join\n Merge Cond: ((x.id1 = y.id1) AND (x.id2 = y.id2))\n -> Incremental Sort\n Sort Key: x.id1, x.id2\n Presorted Key: x.id1\n -> Append\n -> Index Scan on fract_t0 x_1\n -> Index Scan on fract_t1 x_2\n -> Materialize\n -> Incremental Sort\n Sort Key: y.id1, y.id2\n Presorted Key: y.id1\n -> Append\n -> Index Scan on fract_t0 y_1\n -> Index Scan on fract_t1 y_2\n\nor maybe even Incremental Sort + (Merge) Append on top. 
Which is what I\nwas trying to achieve with the experimental patch.\n\n\nFWIW I did briefly look at the Merge Join + Incremental Sort plan too,\nand it seems we don't consider incremental sorts there either. AFAICS\nadd_paths_to_joinrel just calls sort_inner_and_outer, which determines\ninteresting pathkeys in select_outer_pathkeys_for_merge, and I guess\nthat only considers pathkeys usable for full sort. In any case, we don't\nactually add sort paths - we construct sort plans by calling make_sort\nin create_mergejoin_plan. So there's not much chance for incremental\nsort at all. That's kinda unfortunate, I guess. It's one of the\nnon-pathified places that we ignored while adding incremental sort.\n\n>> I don't plan to pursue this further at this point, so I pushed the first\n>> part because that's a fairly simple improvement over what we have now.\n>> But it seems like a fairly promising area for improvements.\n> \n> I think 1) is pretty important, so we should sort that out sooner than\n> later. Apart form that: :+1:\n> Thank you!\n> \n\nI agree it's worth investigating and experimenting with. We may end up\nrealizing those plans are not worth it, but we won't know until we try.\n\nIt may require replacing some of the hard-coded logic in createplan.c\nwith constructing regular alternative paths. IIRC we even did some of\nthis work in the incremental sort patch at some point, but then ripped\nthat out to keep the patch smaller / simpler ... 
need to look at it again.\n\nAre you interested / willing to do some of this work?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 14 Jan 2022 01:39:30 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: generate fractional cheapest paths in\n generate_orderedappend_path" }, { "msg_contents": "Hi,\n\nOn 2022-01-14 01:39:30 +0100, Tomas Vondra wrote:\n> Are you interested / willing to do some of this work?\n\nThis patch hasn't moved in the last two months. I think it may be time to\nmark it as returned with feedback for now?\n\nIt's also failing tests, and has done so for months:\n\nhttps://cirrus-ci.com/task/5308087077699584\nhttps://api.cirrus-ci.com/v1/artifact/task/5308087077699584/log/src/test/regress/regression.diffs\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 21 Mar 2022 17:18:29 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: PATCH: generate fractional cheapest paths in\n generate_orderedappend_path" }, { "msg_contents": "On 3/22/22 01:18, Andres Freund wrote:\n> Hi,\n> \n> On 2022-01-14 01:39:30 +0100, Tomas Vondra wrote:\n>> Are you interested / willing to do some of this work?\n> \n> This patch hasn't moved in the last two months. I think it may be time to\n> mark it as returned with feedback for now?\n> \n> It's also failing tests, and has done so for months:\n> \n> https://cirrus-ci.com/task/5308087077699584\n> https://api.cirrus-ci.com/v1/artifact/task/5308087077699584/log/src/test/regress/regression.diffs\n> \n> Greetings,\n> \n\nYeah. I think it's a useful improvement, but it needs much more work\nthan is doable by the end of this CF. 
RwF seems about right.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 22 Mar 2022 01:33:25 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: generate fractional cheapest paths in\n generate_orderedappend_path" } ]
[ { "msg_contents": "Hi\n\ntoday I worked on postgres's server used for critical service. Because the\napplication is very specific, we had to do final tuning on production\nserver. I fix lot of queries, but I am not able to detect fast queries that\ndoes full scan of middle size tables - to 1M rows. Surely I wouldn't log\nall queries. Now, there are these queries with freq 10 per sec.\n\nCan be nice to have a possibility to set a log of queries that do full\nscan and read more tuples than is specified limit or that does full scan of\nspecified tables.\n\nWhat do you think about the proposed feature?\n\nRegards\n\nPavel", "msg_date": "Sat, 17 Apr 2021 16:36:52 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "proposal - log_full_scan" }, { "msg_contents": "On Sat, Apr 17, 2021 at 04:36:52PM +0200, Pavel Stehule wrote:\n> today I worked on postgres's server used for critical service. Because the\n> application is very specific, we had to do final tuning on production\n> server. I fix lot of queries, but I am not able to detect fast queries that\n> does full scan of middle size tables - to 1M rows. Surely I wouldn't log\n> all queries. 
Now, there are these queries with freq 10 per sec.\n> \n> Can be nice to have a possibility to set a log of queries that do full\n> scan and read more tuples than is specified limit or that does full scan of\n> specified tables.\n> \n> What do you think about the proposed feature?\n\nAre you able to use auto_explain with auto_explain.log_min_duration ?\n\nThen you can search for query logs with\nmessage ~ 'Seq Scan .* \\(actual time=[.0-9]* rows=[0-9]{6,} loops=[0-9]*)'\n\nOr can you use pg_stat_all_tables.seq_scan ?\n\nBut it seems to me that filtering on the duration would be both a more\nimportant criteria and a more general one, than \"seq scan with number of rows\".\n\n| (split_part(message, ' ', 2)::float/1000 AS duration ..) WHERE duration>2222;\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 17 Apr 2021 10:09:31 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: proposal - log_full_scan" }, { "msg_contents": "so 17. 4. 2021 v 17:09 odesílatel Justin Pryzby <pryzby@telsasoft.com>\nnapsal:\n\n> On Sat, Apr 17, 2021 at 04:36:52PM +0200, Pavel Stehule wrote:\n> > today I worked on postgres's server used for critical service. Because\n> the\n> > application is very specific, we had to do final tuning on production\n> > server. I fix lot of queries, but I am not able to detect fast queries\n> that\n> > does full scan of middle size tables - to 1M rows. Surely I wouldn't log\n> > all queries. Now, there are these queries with freq 10 per sec.\n> >\n> > Can be nice to have a possibility to set a log of queries that do full\n> > scan and read more tuples than is specified limit or that does full scan\n> of\n> > specified tables.\n> >\n> > What do you think about the proposed feature?\n>\n> Are you able to use auto_explain with auto_explain.log_min_duration ?\n>\n\nUnfortunately, I cannot use it. 
This server executes 5K queries per\nseconds, and I am afraid to decrease log_min_duration.\n\nThe logs are forwarded to the network and last time, when users played with\nit, then they had problems with the network.\n\nI am in a situation where I know there are queries faster than 100ms, I see\nso there should be fullscans from pg_stat_user_tables, but I don't see the\nqueries.\n\nThe fullscan of this table needs about 30ms and has 200K rows. So\ndecreasing log_min_duration to this value is very risky.\n\n\n\n> Then you can search for query logs with\n> message ~ 'Seq Scan .* \\(actual time=[.0-9]* rows=[0-9]{6,} loops=[0-9]*)'\n>\n> Or can you use pg_stat_all_tables.seq_scan ?\n>\n\nI use pg_stat_all_tables.seq_scan and I see seq scans there. But I need to\nknow the related queries.\n\n\n> But it seems to me that filtering on the duration would be both a more\n> important criteria and a more general one, than \"seq scan with number of\n> rows\".\n>\n> | (split_part(message, ' ', 2)::float/1000 AS duration ..) WHERE\n> duration>2222;\n>\n> --\n> Justin\n>\n\nso 17. 4. 2021 v 17:09 odesílatel Justin Pryzby <pryzby@telsasoft.com> napsal:On Sat, Apr 17, 2021 at 04:36:52PM +0200, Pavel Stehule wrote:\n> today I worked on postgres's server used for critical service. Because the\n> application is very specific, we had to do final tuning on production\n> server. I fix lot of queries, but I am not able to detect fast queries that\n> does full scan of middle size tables - to 1M rows. Surely I wouldn't log\n> all queries. Now, there are these queries with freq 10 per sec.\n> \n> Can be nice to have a possibility to set a log of  queries that do full\n> scan and read more tuples than is specified limit or that does full scan of\n> specified tables.\n> \n> What do you think about the proposed feature?\n\nAre you able to use auto_explain with auto_explain.log_min_duration ?Unfortunately,  I cannot use it. 
This server executes 5K queries per seconds, and I am afraid to decrease log_min_duration.The logs are forwarded to the network and last time, when users played with it, then they had problems with the network.I am in a situation where I know there are queries faster than 100ms, I see so there should be fullscans from pg_stat_user_tables, but I don't see the queries.The fullscan of this table needs about 30ms and has 200K rows. So decreasing log_min_duration to this value is very risky.\n\nThen you can search for query logs with\nmessage ~ 'Seq Scan .* \\(actual time=[.0-9]* rows=[0-9]{6,} loops=[0-9]*)'\n\nOr can you use pg_stat_all_tables.seq_scan ?I use  pg_stat_all_tables.seq_scan and I see seq scans there. But I need to know the related queries.\n\nBut it seems to me that filtering on the duration would be both a more\nimportant criteria and a more general one, than \"seq scan with number of rows\".\n\n| (split_part(message, ' ', 2)::float/1000 AS duration ..) WHERE duration>2222;\n\n-- \nJustin", "msg_date": "Sat, 17 Apr 2021 17:22:59 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - log_full_scan" }, { "msg_contents": "On Sat, Apr 17, 2021 at 05:22:59PM +0200, Pavel Stehule wrote:\n> \n> The fullscan of this table needs about 30ms and has 200K rows. So\n> decreasing log_min_duration to this value is very risky.\n> \n> [...]\n> \n> I use pg_stat_all_tables.seq_scan and I see seq scans there. But I need to\n> know the related queries.\n\nMaybe you could use pg_qualstats ([1]) for that? It will give you the list of\nquals (with the underlying queryid) with a tag to specify if they were executed\nas an index scan or a sequential scan. 
It wouldn't detect queries doing\nsequential scan that don't have any qual for the underlying relations, but\nthose shouldn't be a concern in your use case.\n\nIf you setup some sampling, the overhead should be minimal.\n\n[1]: https://github.com/powa-team/pg_qualstats/\n\n\n", "msg_date": "Sun, 18 Apr 2021 00:54:50 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal - log_full_scan" }, { "msg_contents": "so 17. 4. 2021 v 18:54 odesílatel Julien Rouhaud <rjuju123@gmail.com>\nnapsal:\n\n> On Sat, Apr 17, 2021 at 05:22:59PM +0200, Pavel Stehule wrote:\n> >\n> > The fullscan of this table needs about 30ms and has 200K rows. So\n> > decreasing log_min_duration to this value is very risky.\n> >\n> > [...]\n> >\n> > I use pg_stat_all_tables.seq_scan and I see seq scans there. But I need\n> to\n> > know the related queries.\n>\n> Maybe you could use pg_qualstats ([1]) for that? It will give you the\n> list of\n> quals (with the underlying queryid) with a tag to specify if they were\n> executed\n> as an index scan or a sequential scan. It wouldn't detect queries doing\n> sequential scan that don't have any qual for the underlying relations, but\n> those shouldn't be a concern in your use case.\n>\n> If you setup some sampling, the overhead should be minimal.\n>\n> [1]: https://github.com/powa-team/pg_qualstats/\n\n\nIt has similar functionality - there is a problem with setting. The my idea\nis more simple - just\n\nset\n\nlog_fullscall_min_tupples = 100000\n\nor\n\nalter table xxx set log_fullscan_min_tupples = 0;\n\nand then the complete query can be found in the log.\n\nI think this can be really practical so it can be core functionality. And\nit can log the queries without\nquals too. The productions systems can be buggy and it is important to find\nbugs\n\nRegards\n\nPavel\n\nso 17. 4. 
2021 v 18:54 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:On Sat, Apr 17, 2021 at 05:22:59PM +0200, Pavel Stehule wrote:\n> \n> The fullscan of this table needs about 30ms and has 200K rows. So\n> decreasing log_min_duration to this value is very risky.\n> \n> [...]\n> \n> I use  pg_stat_all_tables.seq_scan and I see seq scans there. But I need to\n> know the related queries.\n\nMaybe you could use pg_qualstats ([1]) for that?  It will give you the list of\nquals (with the underlying queryid) with a tag to specify if they were executed\nas an index scan or a sequential scan.  It wouldn't detect queries doing\nsequential scan that don't have any qual for the underlying relations, but\nthose shouldn't be a concern in your use case.\n\nIf you setup some sampling, the overhead should be minimal.\n\n[1]: https://github.com/powa-team/pg_qualstats/It has similar functionality - there is a problem with setting. The my idea is more simple - just set log_fullscall_min_tupples = 100000  or alter table xxx set log_fullscan_min_tupples = 0;and then the complete query can be found in the log.I think this can be really practical so it can be core functionality. And it can log the queries withoutquals too. The productions systems can be buggy and it is important to find bugsRegardsPavel", "msg_date": "Sat, 17 Apr 2021 19:54:34 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - log_full_scan" }, { "msg_contents": "On Sat, Apr 17, 2021 at 05:22:59PM +0200, Pavel Stehule wrote:\n> so 17. 4. 2021 v 17:09 odesílatel Justin Pryzby <pryzby@telsasoft.com> napsal:\n> \n> > On Sat, Apr 17, 2021 at 04:36:52PM +0200, Pavel Stehule wrote:\n> > > today I worked on postgres's server used for critical service. Because the\n> > > application is very specific, we had to do final tuning on production\n> > > server. 
I fix lot of queries, but I am not able to detect fast queries that\n> > > does full scan of middle size tables - to 1M rows. Surely I wouldn't log\n> > > all queries. Now, there are these queries with freq 10 per sec.\n> > >\n> > > Can be nice to have a possibility to set a log of queries that do full\n> > > scan and read more tuples than is specified limit or that does full scan of\n> > > specified tables.\n> > >\n> > > What do you think about the proposed feature?\n> >\n> > Are you able to use auto_explain with auto_explain.log_min_duration ?\n> \n> Unfortunately, I cannot use it. This server executes 5K queries per\n> seconds, and I am afraid to decrease log_min_duration.\n> \n> The logs are forwarded to the network and last time, when users played with\n> it, then they had problems with the network.\n..\n> The fullscan of this table needs about 30ms and has 200K rows. So\n> decreasing log_min_duration to this value is very risky.\n\nauto_explain.sample_rate should allow setting a sufficiently low value of\nlog_min_duration. It exists since v9.6.\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 17 Apr 2021 13:36:01 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: proposal - log_full_scan" }, { "msg_contents": "so 17. 4. 2021 v 20:36 odesílatel Justin Pryzby <pryzby@telsasoft.com>\nnapsal:\n\n> On Sat, Apr 17, 2021 at 05:22:59PM +0200, Pavel Stehule wrote:\n> > so 17. 4. 2021 v 17:09 odesílatel Justin Pryzby <pryzby@telsasoft.com>\n> napsal:\n> >\n> > > On Sat, Apr 17, 2021 at 04:36:52PM +0200, Pavel Stehule wrote:\n> > > > today I worked on postgres's server used for critical service.\n> Because the\n> > > > application is very specific, we had to do final tuning on production\n> > > > server. I fix lot of queries, but I am not able to detect fast\n> queries that\n> > > > does full scan of middle size tables - to 1M rows. Surely I wouldn't\n> log\n> > > > all queries. 
Now, there are these queries with freq 10 per sec.\n> > > >\n> > > > Can be nice to have a possibility to set a log of queries that do\n> full\n> > > > scan and read more tuples than is specified limit or that does full\n> scan of\n> > > > specified tables.\n> > > >\n> > > > What do you think about the proposed feature?\n> > >\n> > > Are you able to use auto_explain with auto_explain.log_min_duration ?\n> >\n> > Unfortunately, I cannot use it. This server executes 5K queries per\n> > seconds, and I am afraid to decrease log_min_duration.\n> >\n> > The logs are forwarded to the network and last time, when users played\n> with\n> > it, then they had problems with the network.\n> ..\n> > The fullscan of this table needs about 30ms and has 200K rows. So\n> > decreasing log_min_duration to this value is very risky.\n>\n> auto_explain.sample_rate should allow setting a sufficiently low value of\n> log_min_duration. It exists since v9.6.\n>\n>\nIt cannot help - these queries are executed a few times per sec. In same\ntime this server execute 500 - 1000 other queries per sec\n\nRegards\n\nPavel\n\n\n> --\n> Justin\n>\n\nso 17. 4. 2021 v 20:36 odesílatel Justin Pryzby <pryzby@telsasoft.com> napsal:On Sat, Apr 17, 2021 at 05:22:59PM +0200, Pavel Stehule wrote:\n> so 17. 4. 2021 v 17:09 odesílatel Justin Pryzby <pryzby@telsasoft.com> napsal:\n> \n> > On Sat, Apr 17, 2021 at 04:36:52PM +0200, Pavel Stehule wrote:\n> > > today I worked on postgres's server used for critical service. Because the\n> > > application is very specific, we had to do final tuning on production\n> > > server. I fix lot of queries, but I am not able to detect fast queries that\n> > > does full scan of middle size tables - to 1M rows. Surely I wouldn't log\n> > > all queries. 
Now, there are these queries with freq 10 per sec.\n> > >\n> > > Can be nice to have a possibility to set a log of  queries that do full\n> > > scan and read more tuples than is specified limit or that does full scan of\n> > > specified tables.\n> > >\n> > > What do you think about the proposed feature?\n> >\n> > Are you able to use auto_explain with auto_explain.log_min_duration ?\n> \n> Unfortunately,  I cannot use it. This server executes 5K queries per\n> seconds, and I am afraid to decrease log_min_duration.\n> \n> The logs are forwarded to the network and last time, when users played with\n> it, then they had problems with the network.\n..\n> The fullscan of this table needs about 30ms and has 200K rows. So\n> decreasing log_min_duration to this value is very risky.\n\nauto_explain.sample_rate should allow setting a sufficiently low value of\nlog_min_duration.  It exists since v9.6.\nIt cannot help - these queries are executed a few times per sec. In same time this server execute 500 - 1000 other queries per secRegardsPavel \n-- \nJustin", "msg_date": "Sat, 17 Apr 2021 20:51:06 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - log_full_scan" }, { "msg_contents": "so 17. 4. 2021 v 20:51 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> so 17. 4. 2021 v 20:36 odesílatel Justin Pryzby <pryzby@telsasoft.com>\n> napsal:\n>\n>> On Sat, Apr 17, 2021 at 05:22:59PM +0200, Pavel Stehule wrote:\n>> > so 17. 4. 2021 v 17:09 odesílatel Justin Pryzby <pryzby@telsasoft.com>\n>> napsal:\n>> >\n>> > > On Sat, Apr 17, 2021 at 04:36:52PM +0200, Pavel Stehule wrote:\n>> > > > today I worked on postgres's server used for critical service.\n>> Because the\n>> > > > application is very specific, we had to do final tuning on\n>> production\n>> > > > server. I fix lot of queries, but I am not able to detect fast\n>> queries that\n>> > > > does full scan of middle size tables - to 1M rows. 
Surely I\n>> wouldn't log\n>> > > > all queries. Now, there are these queries with freq 10 per sec.\n>> > > >\n>> > > > Can be nice to have a possibility to set a log of queries that do\n>> full\n>> > > > scan and read more tuples than is specified limit or that does full\n>> scan of\n>> > > > specified tables.\n>> > > >\n>> > > > What do you think about the proposed feature?\n>> > >\n>> > > Are you able to use auto_explain with auto_explain.log_min_duration ?\n>> >\n>> > Unfortunately, I cannot use it. This server executes 5K queries per\n>> > seconds, and I am afraid to decrease log_min_duration.\n>> >\n>> > The logs are forwarded to the network and last time, when users played\n>> with\n>> > it, then they had problems with the network.\n>> ..\n>> > The fullscan of this table needs about 30ms and has 200K rows. So\n>> > decreasing log_min_duration to this value is very risky.\n>>\n>> auto_explain.sample_rate should allow setting a sufficiently low value of\n>> log_min_duration. It exists since v9.6.\n>>\n>>\n> It cannot help - these queries are executed a few times per sec. In same\n> time this server execute 500 - 1000 other queries per sec\n>\n\nmaybe this new option for server and for auto_explain can be just simple\n\nlog_seqscan = (minimum number of tuples from one relation)\nauto_explain.log_seqscan = (minimum number of tuples from one relation)\n\nThis is a similar feature like log_temp_files. Next step can be\nimplementing this feature like a table option.\n\nWhat do you think about it?\n\nRegards\n\nPavel\n\nThe extension like pg_qualstat is good, but it does different work. In\ncomplex applications I need to detect buggy (forgotten) queries - last week\nI found two queries over bigger tables without predicates. So the qualstat\ndoesn't help me. 
This is an application for a government with few (but for\ngovernment typical) specific: 1) the life cycle is short (one month), 2)\nthere is not slow start - from first moment the application will be used by\nmore hundred thousands people, 3) the application is very public - so any\nissues are very interesting for press and very unpleasant for politics, and\nin next step for all suppliers (there are high penalty for failures), and\nan admins are not happy from external extensions, 4) the budget is not too\nbig - there is not any performance testing environment\n\nFirst stages are covered well today. We can log and process very slow\nqueries, and fix it immediately - with CREATE INDEX CONCURRENTLY I can do\nit well on production servers too without high risk.\n\nBut the detection of some bad not too slow queries is hard. And as an\nexternal consultant I am not able to install any external extensions to the\nproduction environment for fixing some hot issues, The risk is not\nacceptable for project managers and I understand. So I have to use only\ntools available in Postgres.\n\n\n\n\n> Regards\n>\n> Pavel\n>\n>\n>> --\n>> Justin\n>>\n>\n\nso 17. 4. 2021 v 20:51 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:so 17. 4. 2021 v 20:36 odesílatel Justin Pryzby <pryzby@telsasoft.com> napsal:On Sat, Apr 17, 2021 at 05:22:59PM +0200, Pavel Stehule wrote:\n> so 17. 4. 2021 v 17:09 odesílatel Justin Pryzby <pryzby@telsasoft.com> napsal:\n> \n> > On Sat, Apr 17, 2021 at 04:36:52PM +0200, Pavel Stehule wrote:\n> > > today I worked on postgres's server used for critical service. Because the\n> > > application is very specific, we had to do final tuning on production\n> > > server. I fix lot of queries, but I am not able to detect fast queries that\n> > > does full scan of middle size tables - to 1M rows. Surely I wouldn't log\n> > > all queries. 
Now, there are these queries with freq 10 per sec.\n> > >\n> > > Can be nice to have a possibility to set a log of  queries that do full\n> > > scan and read more tuples than is specified limit or that does full scan of\n> > > specified tables.\n> > >\n> > > What do you think about the proposed feature?\n> >\n> > Are you able to use auto_explain with auto_explain.log_min_duration ?\n> \n> Unfortunately,  I cannot use it. This server executes 5K queries per\n> seconds, and I am afraid to decrease log_min_duration.\n> \n> The logs are forwarded to the network and last time, when users played with\n> it, then they had problems with the network.\n..\n> The fullscan of this table needs about 30ms and has 200K rows. So\n> decreasing log_min_duration to this value is very risky.\n\nauto_explain.sample_rate should allow setting a sufficiently low value of\nlog_min_duration.  It exists since v9.6.\nIt cannot help - these queries are executed a few times per sec. In same time this server execute 500 - 1000 other queries per secmaybe this new option for server and for auto_explain can be just simplelog_seqscan = (minimum number of tuples from one relation)auto_explain.log_seqscan = (minimum number of tuples from one relation)This is a similar feature like log_temp_files. Next step can be implementing this feature like a table option.What do you think about it? RegardsPavelThe extension like pg_qualstat is good, but it does different work. In complex applications I need to detect buggy (forgotten) queries - last week I found two queries over bigger tables without predicates. So the qualstat doesn't help me. 
This is an application for a government with few (but for government typical) specific: 1) the life cycle is short (one month), 2) there is not slow start - from first moment the application will be used by more hundred thousands people, 3) the application is very public - so any issues are very interesting for press and very unpleasant for politics, and in next step for all suppliers (there are high penalty for failures), and an admins are not happy from external extensions, 4) the budget is not too big - there is not any performance testing environment First stages are covered well today. We can log and process very slow queries, and fix it immediately - with CREATE INDEX CONCURRENTLY I can do it well on production servers too without high risk. But the detection of some bad not too slow queries is hard. And as an external consultant I am not able to install any external extensions to the production environment for fixing some hot issues, The risk is not acceptable for project managers and I understand. So I have to use only tools available in Postgres. RegardsPavel \n-- \nJustin", "msg_date": "Sun, 18 Apr 2021 06:21:56 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - log_full_scan" }, { "msg_contents": "On Sun, Apr 18, 2021 at 06:21:56AM +0200, Pavel Stehule wrote:\n> \n> The extension like pg_qualstat is good, but it does different work.\n\nYes definitely. It was just an idea if you needed something right now that\ncould more or less do what you needed, not saying that we shouldn't improve the\ncore :)\n\n> In\n> complex applications I need to detect buggy (forgotten) queries - last week\n> I found two queries over bigger tables without predicates. So the qualstat\n> doesn't help me. \n\nAlso not totally helpful but powa was created to detect problematic queries in\nsuch cases. 
It wouldn't say if it's because of a seq scan or not (so yes again\nwe need to improve that), but it would give you the slowest (or top consumer\nfor any resource) for a given time interval.\n\n> This is an application for a government with few (but for\n> government typical) specific: 1) the life cycle is short (one month), 2)\n> there is not slow start - from first moment the application will be used by\n> more hundred thousands people, 3) the application is very public - so any\n> issues are very interesting for press and very unpleasant for politics, and\n> in next step for all suppliers (there are high penalty for failures), and\n> an admins are not happy from external extensions, 4) the budget is not too\n> big - there is not any performance testing environment\n> \n> First stages are covered well today. We can log and process very slow\n> queries, and fix it immediately - with CREATE INDEX CONCURRENTLY I can do\n> it well on production servers too without high risk.\n> \n> But the detection of some bad not too slow queries is hard. And as an\n> external consultant I am not able to install any external extensions to the\n> production environment for fixing some hot issues, The risk is not\n> acceptable for project managers and I understand. So I have to use only\n> tools available in Postgres.\n\nYes I agree that having additional and more specialized tool in core postgres\nwould definitely help in similar scenario.\n\nI think that having some kind of threshold for seq scan (like the mentioned\nauto_explain.log_seqscan = XXX) in auto_explain would be the best approach, as\nyou really need the plan to know why a seq scan was chosen and if it was a\nreasonable choice or not.\n\n\n", "msg_date": "Sun, 18 Apr 2021 20:28:48 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal - log_full_scan" }, { "msg_contents": "ne 18. 4. 
2021 v 14:28 odesílatel Julien Rouhaud <rjuju123@gmail.com>\nnapsal:\n\n> On Sun, Apr 18, 2021 at 06:21:56AM +0200, Pavel Stehule wrote:\n> >\n> > The extension like pg_qualstat is good, but it does different work.\n>\n> Yes definitely. It was just an idea if you needed something right now that\n> could more or less do what you needed, not saying that we shouldn't\n> improve the\n> core :)\n>\n> > In\n> > complex applications I need to detect buggy (forgotten) queries - last\n> week\n> > I found two queries over bigger tables without predicates. So the\n> qualstat\n> > doesn't help me.\n>\n> Also not totally helpful but powa was created to detect problematic\n> queries in\n> such cases. It wouldn't say if it's because of a seq scan or not (so yes\n> again\n> we need to improve that), but it would give you the slowest (or top\n> consumer\n> for any resource) for a given time interval.\n>\n> > This is an application for a government with few (but for\n> > government typical) specific: 1) the life cycle is short (one month), 2)\n> > there is not slow start - from first moment the application will be used\n> by\n> > more hundred thousands people, 3) the application is very public - so any\n> > issues are very interesting for press and very unpleasant for politics,\n> and\n> > in next step for all suppliers (there are high penalty for failures), and\n> > an admins are not happy from external extensions, 4) the budget is not\n> too\n> > big - there is not any performance testing environment\n> >\n> > First stages are covered well today. We can log and process very slow\n> > queries, and fix it immediately - with CREATE INDEX CONCURRENTLY I can do\n> > it well on production servers too without high risk.\n> >\n> > But the detection of some bad not too slow queries is hard. 
And as an\n> > external consultant I am not able to install any external extensions to\n> the\n> > production environment for fixing some hot issues, The risk is not\n> > acceptable for project managers and I understand. So I have to use only\n> > tools available in Postgres.\n>\n> Yes I agree that having additional and more specialized tool in core\n> postgres\n> would definitely help in similar scenario.\n>\n> I think that having some kind of threshold for seq scan (like the mentioned\n> auto_explain.log_seqscan = XXX) in auto_explain would be the best\n> approach, as\n> you really need the plan to know why a seq scan was chosen and if it was a\n> reasonable choice or not.\n>\n\nI would like to write this for core and for auto_explain too. I was in a\nsituation when I hadnot used auto_explain too. Although this extension is\nwidely used and then the risk is low.\n\nWhen I detect the query, then I can run the explanation manually. But sure\nI think so it can work well inside auto_explain\n\nRegards\n\nPavel\n\nne 18. 4. 2021 v 14:28 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:On Sun, Apr 18, 2021 at 06:21:56AM +0200, Pavel Stehule wrote:\n> \n> The extension like pg_qualstat is good, but it does different work.\n\nYes definitely.  It was just an idea if you needed something right now that\ncould more or less do what you needed, not saying that we shouldn't improve the\ncore :)\n\n> In\n> complex applications I need to detect buggy (forgotten) queries - last week\n> I found two queries over bigger tables without predicates. So the qualstat\n> doesn't help me. \n\nAlso not totally helpful but powa was created to detect problematic queries in\nsuch cases.  
It wouldn't say if it's because of a seq scan or not (so yes again\nwe need to improve that), but it would give you the slowest (or top consumer\nfor any resource) for a given time interval.\n\n> This is an application for a government with few (but for\n> government typical) specific: 1) the life cycle is short (one month), 2)\n> there is not slow start - from first moment the application will be used by\n> more hundred thousands people, 3) the application is very public - so any\n> issues are very interesting for press and very unpleasant for politics, and\n> in next step for all suppliers (there are high penalty for failures), and\n> an admins are not happy from external extensions, 4) the budget is not too\n> big - there is not any performance testing environment\n> \n> First stages are covered well today. We can log and process very slow\n> queries, and fix it immediately - with CREATE INDEX CONCURRENTLY I can do\n> it well on production servers too without high risk.\n> \n> But the detection of some bad not too slow queries is hard. And as an\n> external consultant I am not able to install any external extensions to the\n> production environment for fixing some hot issues, The risk is not\n> acceptable for project managers and I understand. So I have to use only\n> tools available in Postgres.\n\nYes I agree that having additional and more specialized tool in core postgres\nwould definitely help in similar scenario.\n\nI think that having some kind of threshold for seq scan (like the mentioned\nauto_explain.log_seqscan = XXX) in auto_explain would be the best approach, as\nyou really need the plan to know why a seq scan was chosen and if it was a\nreasonable choice or not.I would like to write this for core and for auto_explain too. I was in a situation when I hadnot used auto_explain too. Although this extension is widely used and then the risk is low.When I detect the query, then I can run the explanation manually. 
But sure I think so it can work well inside auto_explainRegardsPavel", "msg_date": "Sun, 18 Apr 2021 16:09:17 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - log_full_scan" }, { "msg_contents": "ne 18. 4. 2021 v 16:09 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> ne 18. 4. 2021 v 14:28 odesílatel Julien Rouhaud <rjuju123@gmail.com>\n> napsal:\n>\n>> On Sun, Apr 18, 2021 at 06:21:56AM +0200, Pavel Stehule wrote:\n>> >\n>> > The extension like pg_qualstat is good, but it does different work.\n>>\n>> Yes definitely. It was just an idea if you needed something right now\n>> that\n>> could more or less do what you needed, not saying that we shouldn't\n>> improve the\n>> core :)\n>>\n>> > In\n>> > complex applications I need to detect buggy (forgotten) queries - last\n>> week\n>> > I found two queries over bigger tables without predicates. So the\n>> qualstat\n>> > doesn't help me.\n>>\n>> Also not totally helpful but powa was created to detect problematic\n>> queries in\n>> such cases. It wouldn't say if it's because of a seq scan or not (so yes\n>> again\n>> we need to improve that), but it would give you the slowest (or top\n>> consumer\n>> for any resource) for a given time interval.\n>>\n>> > This is an application for a government with few (but for\n>> > government typical) specific: 1) the life cycle is short (one month), 2)\n>> > there is not slow start - from first moment the application will be\n>> used by\n>> > more hundred thousands people, 3) the application is very public - so\n>> any\n>> > issues are very interesting for press and very unpleasant for politics,\n>> and\n>> > in next step for all suppliers (there are high penalty for failures),\n>> and\n>> > an admins are not happy from external extensions, 4) the budget is not\n>> too\n>> > big - there is not any performance testing environment\n>> >\n>> > First stages are covered well today. 
We can log and process very slow\n>> > queries, and fix it immediately - with CREATE INDEX CONCURRENTLY I can\n>> do\n>> > it well on production servers too without high risk.\n>> >\n>> > But the detection of some bad not too slow queries is hard. And as an\n>> > external consultant I am not able to install any external extensions to\n>> the\n>> > production environment for fixing some hot issues, The risk is not\n>> > acceptable for project managers and I understand. So I have to use only\n>> > tools available in Postgres.\n>>\n>> Yes I agree that having additional and more specialized tool in core\n>> postgres\n>> would definitely help in similar scenario.\n>>\n>> I think that having some kind of threshold for seq scan (like the\n>> mentioned\n>> auto_explain.log_seqscan = XXX) in auto_explain would be the best\n>> approach, as\n>> you really need the plan to know why a seq scan was chosen and if it was a\n>> reasonable choice or not.\n>>\n>\n> I would like to write this for core and for auto_explain too. I was in a\n> situation when I hadnot used auto_explain too. Although this extension is\n> widely used and then the risk is low.\n>\n> When I detect the query, then I can run the explanation manually. But sure\n> I think so it can work well inside auto_explain\n>\n\nI tried to write the patch. It is harder work for core, than I expected,\nbecause there is not any good information if the query is top or not, so it\nis not easy to decide if we should log info or not.\n\nOn second hand, the modification of auto_explain is easy.\n\nI am sending the patch\n\nRegards\n\nPavel\n\n\n> Regards\n>\n> Pavel\n>\n>", "msg_date": "Mon, 19 Apr 2021 21:20:28 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - log_full_scan" }, { "msg_contents": "Looking at this I like the idea in principle, but I'm not convinced that\nauto_explain is the right tool for this. 
auto_explain is for identifying slow\nqueries, and what you are proposing is to identify queries with a certain\n\"shape\" (for lack of a better term) even if they aren't slow as per the\nlog_min_duration setting.  If log_min_duration is deemed too crude due to query\nvolume then sample_rate is the tool.  If sample_rate is also discarded, then\npg_stat_statements seems a better option.\n\nAlso, why just sequential scans (apart from it being this specific use case)?\nIf the idea is to track aspects of execution which are deemed slow, then\ntracking for example spills etc would be just as valid.  Do you have thoughts\non that?\n\nThat being said, a few comments on the patch:\n\n-\t(auto_explain_log_min_duration >= 0 && \\\n+\t((auto_explain_log_min_duration >= 0 || auto_explain_log_seqscan != -1) && \\\nIs there a reason to not follow the existing code and check for >= 0?\n\n+\tDefineCustomIntVariable(\"auto_explain.log_seqscan\",\nIt's only a matter of time before another node is proposed for logging, and\nthen we'll be stuck adding log_XXXnode GUCs.  Is there a more future-proof way\nto do this?\n\n+\t\"Sets the minimum tuples produced by sequantial scans which plans will be logged\",\ns/sequantial/sequential/\n\n-\tes->analyze = (queryDesc->instrument_options && auto_explain_log_analyze);\n+\tes->analyze = (queryDesc->instrument_options && (auto_explain_log_analyze || auto_explain_log_seqscan != -1));\nTurning on ANALYZE when log_analyze isn't set to True is a no-no IMO.\n\n+ * Colllect relations where log_seqscan limit was exceeded\ns/Colllect/Collect/\n\n+\tif (*relnames.data != '\\0')\n+\t\tappendStringInfoString(&relnames, \",\");\nThis should use appendStringInfoChar instead.\n\n+\t(errmsg(\"duration: %.3f ms, over limit seqscans: %s, plan:\\n%s\",\nThe \"over limit\" part is superfluous since it otherwise wouldn't be logged.  If\nwe're prefixing something wouldn't it be more helpful to include the limit,\nlike: \"seqscans >= %d tuples returned:\". 
I'm not a fan of \"seqscans\" but\nspelling it out is also quite verbose and this is grep-able.\n\nDocumentation and tests are also missing\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Tue, 6 Jul 2021 16:07:33 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: proposal - log_full_scan" }, { "msg_contents": "Hi\n\nút 6. 7. 2021 v 16:07 odesílatel Daniel Gustafsson <daniel@yesql.se> napsal:\n\n> Looking at this I like the idea in principle, but I'm not convinced that\n> auto_explain is the right tool for this. auto_explain is for identifying\n> slow\n> queries, and what you are proposing is to identify queries with a certain\n> \"shape\" (for lack of a better term) even if they aren't slow as per the\n> log_min_duration setting. If log_min_duration is deemed to crude due to\n> query\n> volume then sample_rate is the tool. If sample_rate is also discarded,\n> then\n> pg_stat_statements seems a better option.\n>\n\nI don't think so pg_stat_statements can be used - a) it doesn't check\nexecution plan, so this feature can have big overhead against current\npg_stat_statements, that works just with AST, b) pg_stat_statements has one\nentry per AST - but this can be problem on execution plan level, and this\nis out of perspective of pg_stat_statements.\n\n>\n> Also, why just sequential scans (apart from it being this specific\n> usecase)?\n> If the idea is to track aspects of execution which are deemed slow, then\n> tracking for example spills etc would be just as valid. Do you have\n> thoughts\n> on that?\n>\n\nYes, I thought about it more, and sometimes bitmap index scans are\nproblematic too, index scans in nested loops can be a problem too.\n\nFor my last customer I had to detect queries with a large bitmap index\nscan. 
I can do it with a combination of pg_stat_statements and log\nchecking, but this work is not very friendly.\n\nMy current idea is to have some extension that can be tran for generally\nspecified executor nodes.\n\nSometimes I can say - I need to know all queries that does seq scan over\ntabx where tuples processed > N. In other cases can be interesting to know\nthe queries that uses index x for bitmap index scan,\n\n\n>\n> That being said, a few comments on the patch:\n>\n> - (auto_explain_log_min_duration >= 0 && \\\n> + ((auto_explain_log_min_duration >= 0 || auto_explain_log_seqscan\n> != -1) && \\\n> Is there a reason to not follow the existing code and check for >= 0?\n>\n> + DefineCustomIntVariable(\"auto_explain.log_seqscan\",\n> It's only a matter of time before another node is proposed for logging, and\n> then we'll be stuck adding log_XXXnode GUCs. Is there a more future-proof\n> way\n> to do this?\n>\n> + \"Sets the minimum tuples produced by sequantial scans which plans\n> will be logged\",\n> s/sequantial/sequential/\n>\n> - es->analyze = (queryDesc->instrument_options &&\n> auto_explain_log_analyze);\n> + es->analyze = (queryDesc->instrument_options &&\n> (auto_explain_log_analyze || auto_explain_log_seqscan != -1));\n> Turning on ANALYZE when log_analyze isn't set to True is a no-no IMO.\n>\n> + * Colllect relations where log_seqscan limit was exceeded\n> s/Colllect/Collect/\n>\n> + if (*relnames.data != '\\0')\n> + appendStringInfoString(&relnames, \",\");\n> This should use appendStringInfoChar instead.\n>\n> + (errmsg(\"duration: %.3f ms, over limit seqscans: %s, plan:\\n%s\",\n> The \"over limit\" part is superfluous since it otherwise wouldn't be\n> logged. If\n> we're prefixing something wouldn't it be more helpful to include the limit,\n> like: \"seqscans >= %d tuples returned:\". 
I'm not a fan of \"seqscans\" but\n> spelling it out is also quite verbose and this is grep-able.\n>\n> Documentation and tests are also missing\n>\n\nUnfortunately, this idea is not well prepared. My patch was a proof concept\n- but I think so it is not a good start point. Maybe it needs some tracing\nAPI on executor level and some tool like \"perf top\", but for executor. Post\nexecution analysis is not a good direction with big overhead, and mainly it\nis not friendly in critical situations. I need some handy tool like perf,\nbut for executor nodes. I don't know how to do it effectively.\n\nThank you for your review and for your time, but I think it is better to\nremove this patch from commit fest. I have no idea how to design this\nfeature well :-/\n\nRegards\n\nPavel\n\n\n\n\n>\n> --\n> Daniel Gustafsson https://vmware.com/\n>\n>\n\nHiút 6. 7. 2021 v 16:07 odesílatel Daniel Gustafsson <daniel@yesql.se> napsal:Looking at this I like the idea in principle, but I'm not convinced that\nauto_explain is the right tool for this.  auto_explain is for identifying slow\nqueries, and what you are proposing is to identify queries with a certain\n\"shape\" (for lack of a better term) even if they aren't slow as per the\nlog_min_duration setting.  If log_min_duration is deemed to crude due to query\nvolume then sample_rate is the tool.  If sample_rate is also discarded, then\npg_stat_statements seems a better option.I don't think so pg_stat_statements can be used - a) it doesn't check execution plan, so this feature can have big overhead against current pg_stat_statements, that works just with AST, b) pg_stat_statements has one entry per AST - but this can be problem on execution plan level, and this is out of perspective of pg_stat_statements. \n\nAlso, why just sequential scans (apart from it being this specific usecase)?\nIf the idea is to track aspects of execution which are deemed slow, then\ntracking for example spills etc would be just as valid.  
Do you have thoughts\non that?Yes, I thought about it more, and sometimes bitmap index scans are problematic too, index scans in nested loops can be a problem too.For my last customer I had to detect queries with a large bitmap index scan. I can do it with a combination of pg_stat_statements and log checking, but this work is not very friendly.My current idea is to have some extension that can be tran for generally specified executor nodes.Sometimes I can say - I need to know all queries that does seq scan over tabx where tuples processed > N. In other cases can be interesting to know the queries that uses index x for bitmap index scan,  \n\nThat being said, a few comments on the patch:\n\n-       (auto_explain_log_min_duration >= 0 && \\\n+       ((auto_explain_log_min_duration >= 0 || auto_explain_log_seqscan != -1) && \\\nIs there a reason to not follow the existing code and check for >= 0?\n\n+       DefineCustomIntVariable(\"auto_explain.log_seqscan\",\nIt's only a matter of time before another node is proposed for logging, and\nthen we'll be stuck adding log_XXXnode GUCs.  Is there a more future-proof way\nto do this?\n\n+       \"Sets the minimum tuples produced by sequantial scans which plans will be logged\",\ns/sequantial/sequential/\n\n-       es->analyze = (queryDesc->instrument_options && auto_explain_log_analyze);\n+       es->analyze = (queryDesc->instrument_options && (auto_explain_log_analyze || auto_explain_log_seqscan != -1));\nTurning on ANALYZE when log_analyze isn't set to True is a no-no IMO.\n\n+ * Colllect relations where log_seqscan limit was exceeded\ns/Colllect/Collect/\n\n+       if (*relnames.data != '\\0')\n+               appendStringInfoString(&relnames, \",\");\nThis should use appendStringInfoChar instead.\n\n+       (errmsg(\"duration: %.3f ms, over limit seqscans: %s, plan:\\n%s\",\nThe \"over limit\" part is superfluous since it otherwise wouldn't be logged.  
If\nwe're prefixing something wouldn't it be more helpful to include the limit,\nlike: \"seqscans >= %d tuples returned:\".  I'm not a fan of \"seqscans\" but\nspelling it out is also quite verbose and this is grep-able.\n\nDocumentation and tests are also missingUnfortunately, this idea is not well prepared. My patch was a proof concept - but I think so it is not a good start point. Maybe it needs some tracing API on executor level and some tool like \"perf top\", but for executor. Post execution analysis is not a good direction with big overhead, and mainly it is not friendly in critical situations. I need some handy tool like perf, but for executor nodes. I don't know how to do it effectively.Thank you for your review and for your time, but I think it is better to remove this patch from commit fest. I have no idea how to design this feature well :-/RegardsPavel  \n\n--\nDaniel Gustafsson               https://vmware.com/", "msg_date": "Tue, 6 Jul 2021 18:14:23 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - log_full_scan" }, { "msg_contents": "> On 6 Jul 2021, at 18:14, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n\n> I thought about it more, and sometimes bitmap index scans are problematic too, index scans in nested loops can be a problem too.\n\nRight. Depending on the circumstances, pretty much anything in a plan can be\nsomething deemed problematic in some production setting.\n\n> Unfortunately, this idea is not well prepared. My patch was a proof concept - but I think so it is not a good start point. Maybe it needs some tracing API on executor level and some tool like \"perf top\", but for executor. Post execution analysis is not a good direction with big overhead, and mainly it is not friendly in critical situations. I need some handy tool like perf, but for executor nodes. 
I don't know how to do it effectively.\n\nThese are hot codepaths so adding tracing instrumentation which collects enough\ninformation to be useful, and which can be drained fast enough to not be a\nresource problem is tricky.\n\n> Thank you for your review and for your time, but I think it is better to remove this patch from commit fest. I have no idea how to design this feature well :-/\n\nNo worries, I hope we see an updated approach at some time. In the meantime\nI'm marking this patch Returned with Feedback.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Tue, 6 Jul 2021 21:08:53 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: proposal - log_full_scan" } ]
[ { "msg_contents": "Hello,\n\nwhen creating an event trigger for ddl_command_end that calls\npg_event_trigger_ddl_commands certain statements will cause the\ntrigger to fail with a cache lookup error. The error happens on\nmaster, 13 and 12; I didn't test any previous versions.\n\ntrg=# ALTER TABLE t ALTER COLUMN f1 SET DATA TYPE bigint, ALTER COLUMN\nf1 DROP IDENTITY;\nERROR: XX000: cache lookup failed for relation 13476892\nCONTEXT: PL/pgSQL function ddl_end() line 5 at FOR over SELECT rows\nLOCATION: getRelationTypeDescription, objectaddress.c:4178\n\nFor the ALTER DATA TYPE we create a command to adjust the sequence\nwhich gets recorded in the event trigger commandlist, which leads to\nthe described failure when the sequence is dropped as part of another\nALTER TABLE subcommand and information about the sequence can no\nlonger be looked up.\n\nTo reproduce:\nCREATE OR REPLACE FUNCTION ddl_end()\nRETURNS event_trigger AS $$\nDECLARE\nr RECORD;\nBEGIN\nFOR r IN SELECT * FROM pg_event_trigger_ddl_commands()\nLOOP\nRAISE NOTICE 'ddl_end: % %', r.command_tag, r.object_type;\nEND LOOP;\nEND;\n$$ LANGUAGE plpgsql;\n\nCREATE EVENT TRIGGER ddl_end ON ddl_command_end EXECUTE PROCEDURE ddl_end();\n\nCREATE TABLE t(f1 int NOT NULL GENERATED ALWAYS AS IDENTITY);\nALTER TABLE t ALTER COLUMN f1 DROP IDENTITY, ALTER COLUMN f1 SET DATA\nTYPE bigint;\n\nI tried really hard to look for a different way to detect this error\nearlier but since the subcommands are processed independently I\ncouldn't come up with a non-invasive version. 
Someone more familiar\nwith this code might have an idea for a better solution.\n\nAny thoughts?\n\nhttps://www.postgresql.org/message-id/CAMCrgp39V7JQA_Gc+JaEZV3ALOU1ZG=Pwyk3oDpTq7F6Z0JSmg@mail.gmail.com\n--\nRegards, Sven Klemm", "msg_date": "Sun, 18 Apr 2021 14:12:45 +0200", "msg_from": "Sven Klemm <sven@timescale.com>", "msg_from_op": true, "msg_subject": "Fix dropped object handling in pg_event_trigger_ddl_commands" }, { "msg_contents": "On Sun, Apr 18, 2021 at 2:12 PM Sven Klemm <sven@timescale.com> wrote:\n> when creating an event trigger for ddl_command_end that calls\n> pg_event_trigger_ddl_commands certain statements will cause the\n> trigger to fail with a cache lookup error. The error happens on\n> master, 13 and 12 I didnt test any previous versions.\n>\n> trg=# ALTER TABLE t ALTER COLUMN f1 SET DATA TYPE bigint, ALTER COLUMN\n> f1 DROP IDENTITY;\n> ERROR: XX000: cache lookup failed for relation 13476892\n> CONTEXT: PL/pgSQL function ddl_end() line 5 at FOR over SELECT rows\n> LOCATION: getRelationTypeDescription, objectaddress.c:4178\n\nAny opinions on the patch? Is this not worth the effort to fix or is\nthere a better way to fix this?\n\nhttps://www.postgresql.org/message-id/CAMCrgp2R1cEXU53iYKtW6yVEp2_yKUz+z=3-CTrYpPP+xryRtg@mail.gmail.com\n\n-- \nRegards, Sven Klemm\n\n\n", "msg_date": "Sun, 25 Apr 2021 12:20:06 +0200", "msg_from": "Sven Klemm <sven@timescale.com>", "msg_from_op": true, "msg_subject": "Re: Fix dropped object handling in pg_event_trigger_ddl_commands" }, { "msg_contents": "On 2021-Apr-25, Sven Klemm wrote:\n\n> On Sun, Apr 18, 2021 at 2:12 PM Sven Klemm <sven@timescale.com> wrote:\n> > when creating an event trigger for ddl_command_end that calls\n> > pg_event_trigger_ddl_commands certain statements will cause the\n> > trigger to fail with a cache lookup error. 
The error happens on\n> > master, 13 and 12 I didnt test any previous versions.\n> >\n> > trg=# ALTER TABLE t ALTER COLUMN f1 SET DATA TYPE bigint, ALTER COLUMN\n> > f1 DROP IDENTITY;\n> > ERROR: XX000: cache lookup failed for relation 13476892\n> > CONTEXT: PL/pgSQL function ddl_end() line 5 at FOR over SELECT rows\n> > LOCATION: getRelationTypeDescription, objectaddress.c:4178\n> \n> Any opinions on the patch? Is this not worth the effort to fix or is\n> there a better way to fix this?\n\nHello, I haven't looked at this but judging from the general shape of\nfunction and error message, it seems clearly a bug that needs to be\nfixed somehow. I'll try to make time to look at it sometime soon, but I\nhave other bugs to investigate and fix, so it may be some time.\n\nI fear your proposal of ignoring the object may be the best we can do,\nbut I don't like it much.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"La verdad no siempre es bonita, pero el hambre de ella sí\"\n\n\n", "msg_date": "Sun, 25 Apr 2021 15:40:40 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Fix dropped object handling in pg_event_trigger_ddl_commands" }, { "msg_contents": "Hi hackers,\n\n> Any opinions on the patch? Is this not worth the effort to fix or is\n> there a better way to fix this?\n\nI confirm that the bug still exists in master (be57f216). The patch\nfixes it and looks good to me. I changed the wording a little and\nadded a regression test. The updated patch is in the attachment. 
I\nadded this thread to the CF and updated the status to \"Ready for\nCommitter\".\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Mon, 7 Jun 2021 12:44:42 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Fix dropped object handling in pg_event_trigger_ddl_commands" }, { "msg_contents": "On Mon, Jun 07, 2021 at 12:44:42PM +0300, Aleksander Alekseev wrote:\n> I confirm that the bug still exists in master (be57f216). The patch\n> fixes it and looks good to me. I changed the wording a little and\n> added a regression test. The updated patch is in the attachment. I\n> added this thread to the CF and updated the status to \"Ready for\n> Committer\".\n\nFWIW, that looks rather natural to me to just ignore the object\nif it has already been dropped here. 
The same kind of rules apply to\n> tables dropped with DROP TABLE which would not show up as of\n> pg_event_trigger_ddl_commands(), but one can get a list as of\n> pg_event_trigger_dropped_objects().\n\nOh, that parallel didn't occur to me. I agree it seems a useful\nprecedent.\n\n> Alvaro, were you planning to look at that? I have not looked at the\n> patch in details. \n\nI have it on my list of things to look at, but it isn't priority. If\nyou to mess with it, please be my guest.\n\n> missing_ok is available in getObjectIdentity() only\n> since v14, so this cannot be backpatched :/\n\nOoh, yeah, I forgot about that. And that one was pretty invasive ...\n\nI'm not sure if we can reasonably implement a fix for older releases.\nI mean, it's a relatively easy test: do a syscache search for the object\nor a catalog indexscan (easy to do with get_object_property_data-based\nAPI), and if the object is gone, skip getObjectTypeDescription and\ngetObjectIdentity. But maybe this is too much code to add to stable\nreleases ...\n\n-- \n�lvaro Herrera Valdivia, Chile\n\n\n", "msg_date": "Wed, 9 Jun 2021 09:55:08 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Fix dropped object handling in pg_event_trigger_ddl_commands" }, { "msg_contents": "On Wed, Jun 09, 2021 at 09:55:08AM -0400, Alvaro Herrera wrote:\n> I'm not sure if we can reasonably implement a fix for older releases.\n> I mean, it's a relatively easy test: do a syscache search for the object\n> or a catalog indexscan (easy to do with get_object_property_data-based\n> API), and if the object is gone, skip getObjectTypeDescription and\n> getObjectIdentity. 
But maybe this is too much code to add to stable\nreleases ...\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\n\n", "msg_date": "Wed, 9 Jun 2021 09:55:08 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Fix dropped object handling in pg_event_trigger_ddl_commands" }, { "msg_contents": "On Wed, Jun 09, 2021 at 09:55:08AM -0400, Alvaro Herrera wrote:\n> I'm not sure if we can reasonably implement a fix for older releases.\n> I mean, it's a relatively easy test: do a syscache search for the object\n> or a catalog indexscan (easy to do with get_object_property_data-based\n> API), and if the object is gone, skip getObjectTypeDescription and\n> getObjectIdentity. 
Now, this approach makes my spider\nsense tingle.\n--\nMichael", "msg_date": "Thu, 10 Jun 2021 17:07:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix dropped object handling in pg_event_trigger_ddl_commands" }, { "msg_contents": "On Thu, Jun 10, 2021 at 05:07:28PM +0900, Michael Paquier wrote:\n> Except that these syscache lookups need to be done on an object-type\n> basis, which is basically what getObjectDescription() & friends now do\n> where the logic makes sure that we have a correct objectId <-> cacheId\n> mapping for the syscache lookups. So that would be roughly copying\n> into event_trigger.c what objectaddress.c does now, but for the back\n> branches. It would be better to just backport the changes to support\n> missing_ok in objectaddress.c if we go down this road, but the\n> invasiveness makes that much more complicated.\n\nI have been looking at that more this morning, and I have convinced\nmyself that skipping objects should work fine. The test added at the\nbottom of event_trigger.sql was making the file a bit messier though,\nand there are already tests for relations when it comes to dropped\nobjects. So let's do a bit of consolidation while on it with an extra\nevent trigger on ddl_command_end and relations on the schema evttrig.\n\nThis one already included some cases for serial columns, so that's\nnatural to me to extend the area for identity columns. I have also\nadded a case for a serial column dropped, while on it. The last thing\nis the addition of r.object_identity from\npg_event_trigger_ddl_commands() in the data generated for the output\nmessages, so as the output is as complete as possible.\n\nRegarding the back-branches, I am tempted to do nothing. The APIs are\njust not here to do the job. On top of being an invasive change, it\ntook 4 years for somebody to complain on this matter, as this exists\nsince 10. 
That's not worth the risk/cost.\n--\nMichael", "msg_date": "Fri, 11 Jun 2021 12:46:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix dropped object handling in pg_event_trigger_ddl_commands" }, { "msg_contents": "Hi Michael,\n\n> /* The type can never be NULL */\n> type = getObjectTypeDescription(&addr, true);\n\nThe last argument should be `false` then.\n\n\n--\nBest regards,\nAleksander Alekseev", "msg_date": "Fri, 11 Jun 2021 11:00:40 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Fix dropped object handling in pg_event_trigger_ddl_commands" }, { "msg_contents": "On Fri, Jun 11, 2021 at 11:00:40AM +0300, Aleksander Alekseev wrote:\n> The last argument should be `false` then.\n\nHm, nope. I think that we had better pass true as argument here.\n\nFirst, this is more consistent with the identity lookup (OK, it does\nnot matter as we would have discarded the object after the identity\nlookup anyway, but any future shuffling of this code may not be that\nwise). Second, now that I look at it, getObjectTypeDescription() can\nnever be NULL as we have fallback names for relations, routines and\nconstraints for all object types so the buffer will be filled with\nsome data. Let's replace the bottom of getObjectTypeDescription()\nthat returns now NULL by Assert(buffer.len > 0). This code is new as\nof v14, so it is better to adjust that sooner than later.\n--\nMichael", "msg_date": "Fri, 11 Jun 2021 21:36:57 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix dropped object handling in pg_event_trigger_ddl_commands" }, { "msg_contents": "On Fri, Jun 11, 2021 at 09:36:57PM +0900, Michael Paquier wrote:\n> Hm, nope. 
I think that we had better pass true as argument here.\n\nThe main patch has been applied as of 2d689ba.\n\n> First, this is more consistent with the identity lookup (OK, it does\n> not matter as we would have discarded the object after the identity\n> lookup anyway, but any future shuffling of this code may not be that\n> wise). Second, now that I look at it, getObjectTypeDescription() can\n> never be NULL as we have fallback names for relations, routines and\n> constraints for all object types so the buffer will be filled with\n> some data. Let's replace the bottom of getObjectTypeDescription()\n> that returns now NULL by Assert(buffer.len > 0). This code is new as\n> of v14, so it is better to adjust that sooner than later.\n\nAnd this has been simplified with b56b83a.\n--\nMichael", "msg_date": "Mon, 14 Jun 2021 15:48:13 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix dropped object handling in pg_event_trigger_ddl_commands" } ]
[ { "msg_contents": "Hi:\n\nI would talk about the impact of init partition prune for\nset_append_rel_size.\nand create_append_path. Finally I just want to focus on set_append_rel_size\nonly in this thread.\n\nGiven the below example:\n\nCREATE TABLE P (part_key int, v int) PARTITION BY RANGE (part_key);\nCREATE TABLE p_1 PARTITION OF p FOR VALUES FROM (0) TO (10);\nCREATE TABLE p_2 PARTITION OF p FOR VALUES FROM (10) TO (20);\nCREATE TABLE p_3 PARTITION OF p FOR VALUES FROM (20) TO (30);\nINSERT INTO p SELECT i % 30, i FROM generate_series(1, 300)i;\n\nset plan_cache_mode to force_generic_plan ;\nprepare s as select * from p where part_key = $1;\nexplain analyze execute s(2);\n\nThen we will get estimated RelOptInfo.rows = 30, but actually it is 10 rows.\n\nexplain analyze execute s(2);\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------\n Append (cost=0.00..6.90 rows=30 width=8) (actual time=0.019..0.042\nrows=10 loops=1)\n Subplans Removed: 2\n -> Seq Scan on p_1 (cost=0.00..2.25 rows=10 width=8) (actual\ntime=0.017..0.038 rows=10 loops=1)\n Filter: (part_key = $1)\n Rows Removed by Filter: 90\n Planning Time: 0.885 ms\n Execution Time: 0.156 ms\n(7 rows)\n\nActually there are 2 issues here. one is RelOptInfo->rows which is set by\nset_append_rel_size, the other one appendPath->path.rows is set at\ncreate_append_path. They are two independent data. (When we estimate\nthe rows of a joinrel, we only consider the RelOptInfo.rows rather than\nPath.rows).\n\nIn set_append_rel_size, it pushes the quals to each child relation and does\na sum of\neach child->rows. child's stats works better than parent stats if we know\nexactly which\npartitions we would access. But this strategy fails when init prune comes as\nabove.\n\nSo I think considering parent's stats for init prune case might be a good\nsolution (Ashutosh has mentioned global stats for this a long time\nago[1]). 
So I want\nto refactor the code like this:\n\na). should_use_parent_stats(..); Decides which stats we should use for an\nAppendRel.\nb). set_append_rel_size_locally: Just do what we currently do.\nc). set_append_rel_size_globally: We calculate the quals selectivity on\nAppendRel level, and set the rows with AppendRel->tuples * sel.\n\nMore about should_use_parent_stats function:\n1. If there are no quals for initial partition prune, we use child's stats.\n2. If we have quals for initial partition prune, and the left op is not\nused in\n planning time prune, we use parent's stats. For example: (part_key = 2\nand\n part_key > $1);\n\nHowever when I was coding it, I found out that finding \"quals for initial\npartition prune\"\nis not so easy. So I doubt if we need the troubles to decide which method\nto use. Attached is just the PoC version which will use parent's stats\nall the time.\n\nAuthor: 一挃 <yizhi.fzh@alibaba-inc.com>\nDate: Sun Apr 18 22:02:54 2021 +0800\n\n Currently the set_append_rel_size doesn't consider the init partition\n\n prune, so the estimated size may be wrong at a big scale sometimes.\n In this patch I used the set the rows = parentrel->tuples *\n clauseselecitivty. In this case we can loss some accuracy when the\ninitial\n partition prune doesn't happen at all. but generally I think it would\nbe OK.\n\n Another strategy is we should check if init partition prune can happen.\n if we are sure about that, we adapt the above way. or else we can use\n the local stats strategy still.\n\n[1]\nhttps://www.postgresql.org/message-id/CAExHW5t5Q7JuUW28QMRO7szuHcbsfx4M9%3DWL%2Bup40h3PCd7dXw%40mail.gmail.com\n\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)", "msg_date": "Sun, 18 Apr 2021 22:39:18 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Consider parent's stats for set_append_rel_size." } ]
[ { "msg_contents": "Hi,\n\nI'm surprised that the following expression is false:\n\nselect to_tsvector('english', 'aaa: bbb') @@\nwebsearch_to_tsquery('english', '\"aaa: bbb\"');\n ?column?\n----------\n f\n(1 row)\n\nMy expectation is that to_tsvector('english', text) @@\nwebsearch_to_tsquery('english', '\" || text || \"') would be true for\nall texts, or pretty close to all texts. Otherwise it makes search\nrather unpredictable. The actual example that started this\ninvestigation was searching for '\"/path/to/some/exe: no such file or\ndirectory\"' (which was failing to find the exact matches that I knew\nexisted).\n\nLooking at the tsvector and tsquery, we can see that the problem is\nthat the \":\" counts as one position for the ts_query but not the\nts_vector:\n\nselect to_tsvector('english', 'aaa: bbb'), websearch_to_tsquery('english',\n'\"aaa: bbb\"');\n to_tsvector | websearch_to_tsquery\n-----------------+----------------------\n 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb'\n(1 row)\n\nSo I wondered: are there more such cases? 
Looking at all texts of the\nform 'aaa' || maybe-space || one-byte || maybe-space || 'bbb', it\nhappens quite a bit:\n\nselect text, ts_vector, ts_query, matches from unnest(array['', ' ']) as\nprefix, unnest(array['', ' ']) as suffix, (select chr(a) as char from\ngenerate_series(1,192) as s(a)) as zz1, lateral (select 'aaa' || prefix ||\nchar || suffix || 'bbb' as text) as zz2, lateral (select\nto_tsvector('english', text) as ts_vector) as zz3, lateral (select\nwebsearch_to_tsquery('english', '\"' || text || '\"') as ts_query) as zz4,\nlateral (select ts_vector @@ ts_query as matches) as zz5 where not matches;\n text | ts_vector | ts_query | matches\n----------------+-----------------+------------------+---------\n aaa \\x01 bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\x02 bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\x03 bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\x04 bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\x05 bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\x06 bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\x07 bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\x08 bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\x0E bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\x0F bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\x10 bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\x11 bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\x12 bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\x13 bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\x14 bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\x15 bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\x16 bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\x17 bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\x18 bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\x19 bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\x1A bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\x1B bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\x1C bbb | 'aaa':1 'bbb':2 | 'aaa' 
<2> 'bbb' | f\n aaa \\x1D bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\x1E bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\x1F bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa # bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa $ bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa % bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa ' bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa * bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa + bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa , bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa . bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa / bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa: bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa : bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa ; bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa = bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa > bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa ? bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa @ bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa [ bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\ bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa ] bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa ^ bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa _ bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa ` bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa { bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa } bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa ~bbb | 'aaa':1 'bbb':2 | 'aaa' <-> '~bbb' | f\n aaa ~ bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\x7F bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\u0080 bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\u0081 bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\u0082 bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\u0083 bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\u0084 bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\u0085 bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\u0086 bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n 
aaa \\u0087 bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\u0088 bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\u0089 bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\u008A bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\u008B bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\u008C bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\u008D bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\u008E bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\u008F bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\u0090 bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\u0091 bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\u0092 bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\u0093 bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\u0094 bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\u0095 bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\u0096 bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\u0097 bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\u0098 bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\u0099 bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\u009A bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\u009B bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\u009C bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\u009D bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\u009E bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa \\u009F bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa ¡ bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa ¢ bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa £ bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa ¤ bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa ¥ bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa ¦ bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa § bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa ¨ bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa © bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa « bbb | 'aaa':1 'bbb':2 | 
'aaa' <2> 'bbb' | f\n aaa ¬ bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa ­ bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa ® bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa ¯ bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa ° bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa ± bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa ² bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa ³ bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa ´ bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa ¶ bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa · bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa ¸ bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa ¹ bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa » bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa ¼ bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa ½ bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa ¾ bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n aaa ¿ bbb | 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb' | f\n(114 rows)\n\nThere is no obvious workaround either:\n\n- there's no function that converts a tsvector like 'aaa':1 'bbb':2\ninto a tsquery like 'aaa' <-> 'bbb', that one might be able to use to\nbuild a query with exactly the same normalization as tsvector.\n\n- replacing all problematic characters above by spaces seems to work\nfor most characters but not others, as for instance it fixes 'aaa\n. bbb' but breaks 'aaa.bbb'.\n\nselect version();\n version\n\n---------------------------------------------------------------------------------------------------------\n PostgreSQL 14devel on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu\n9.3.0-17ubuntu1~20.04) 9.3.0, 64-bit\n(1 row)", "msg_date": "Sun, 18 Apr 2021 10:53:36 -0400", "msg_from": "Valentin Gatien-Baron <valentin.gatienbaron@gmail.com>", "msg_from_op": true, "msg_subject": "websearch_to_tsquery() returns queries that don't match to_tsvector()" }, { "msg_contents": "Hi!\n\nOn Mon, Apr 19, 2021 at 9:57 AM Valentin Gatien-Baron\n<valentin.gatienbaron@gmail.com> wrote:\n> Looking at the tsvector and tsquery, we can see that the problem is\n> that the \":\" counts as one position for the ts_query but not the\n> ts_vector:\n>\n> select to_tsvector('english', 'aaa: bbb'), websearch_to_tsquery('english', '\"aaa: bbb\"');\n> to_tsvector | websearch_to_tsquery\n> -----------------+----------------------\n> 'aaa':1 'bbb':2 | 'aaa' <2> 'bbb'\n> (1 row)\n\nIt seems there is another bug with phrase search and query parsing.\nIt seems to me that since 0c4f355c6a websearch_to_tsquery() should\njust parse text in quotes as a single token. 
Besides fixing this bug,\nit simplifies the code.\n\nTrying to fix this bug before 0c4f355c6a doesn't seem to worth the efforts.\n\nI propose to push the attached patch to v14. Objections?\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Sun, 2 May 2021 20:45:18 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: websearch_to_tsquery() returns queries that don't match\n to_tsvector()" }, { "msg_contents": "Alexander Korotkov <aekorotkov@gmail.com> writes:\n> It seems there is another bug with phrase search and query parsing.\n> It seems to me that since 0c4f355c6a websearch_to_tsquery() should\n> just parse text in quotes as a single token. Besides fixing this bug,\n> it simplifies the code.\n\nOK ...\n\n> Trying to fix this bug before 0c4f355c6a doesn't seem to worth the efforts.\n\nAgreed, plus it doesn't sound like the sort of behavior change that\nwe want to push out in minor releases.\n\n> I propose to push the attached patch to v14. Objections?\n\nThis patch seems to include some unrelated fooling around in GiST?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 02 May 2021 13:52:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: websearch_to_tsquery() returns queries that don't match\n to_tsvector()" }, { "msg_contents": "On Sun, May 2, 2021 at 8:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > It seems there is another bug with phrase search and query parsing.\n> > It seems to me that since 0c4f355c6a websearch_to_tsquery() should\n> > just parse text in quotes as a single token. Besides fixing this bug,\n> > it simplifies the code.\n>\n> OK ...\n>\n> > Trying to fix this bug before 0c4f355c6a doesn't seem to worth the efforts.\n>\n> Agreed, plus it doesn't sound like the sort of behavior change that\n> we want to push out in minor releases.\n\n+1\n\n> > I propose to push the attached patch to v14. 
Objections?\n>\n> This patch seems to include some unrelated fooling around in GiST?\n\nOoops, I've included this by oversight. The next revision is attached.\n\nAnything besides that?\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Sun, 2 May 2021 20:57:00 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: websearch_to_tsquery() returns queries that don't match\n to_tsvector()" }, { "msg_contents": "Alexander Korotkov <aekorotkov@gmail.com> writes:\n> Ooops, I've included this by oversight. The next revision is attached.\n> Anything besides that?\n\nSome quick eyeball review:\n\n+ /* Everything is quotes is processed as a single token */\n\nShould read \"Everything in quotes ...\"\n\n- /* or else gettoken_tsvector() will raise an error */\n+ /* or else ƒtsvector() will raise an error */\n\nLooks like an unintentional change?\n\n@@ -846,7 +812,6 @@ parse_tsquery(char *buf,\n \tstate.buffer = buf;\n \tstate.buf = buf;\n \tstate.count = 0;\n-\tstate.in_quotes = false;\n \tstate.state = WAITFIRSTOPERAND;\n \tstate.polstr = NIL;\n\nThis change seems wrong/unsafe too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 02 May 2021 14:04:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: websearch_to_tsquery() returns queries that don't match\n to_tsvector()" }, { "msg_contents": "On Sun, May 2, 2021 at 10:57 AM Alexander Korotkov <aekorotkov@gmail.com>\nwrote:\n\n> On Sun, May 2, 2021 at 8:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > > It seems there is another bug with phrase search and query parsing.\n> > > It seems to me that since 0c4f355c6a websearch_to_tsquery() should\n> > > just parse text in quotes as a single token. 
Besides fixing this bug,\n> > > it simplifies the code.\n> >\n> > OK ...\n> >\n> > > Trying to fix this bug before 0c4f355c6a doesn't seem to worth the\n> efforts.\n> >\n> > Agreed, plus it doesn't sound like the sort of behavior change that\n> > we want to push out in minor releases.\n>\n> +1\n>\n> > > I propose to push the attached patch to v14. Objections?\n> >\n> > This patch seems to include some unrelated fooling around in GiST?\n>\n> Ooops, I've included this by oversight. The next revision is attached.\n>\n> Anything besides that?\n>\n> ------\n> Regards,\n> Alexander Korotkov\n>\n\nHi,\n+ /* Everything is quotes is processed as a single token\n*/\n\nis quotes -> in quotes\n\n+ /* iterate to the closing quotes or end of the string*/\n\nclosing quotes -> closing quote\n\n+ /* or else ƒtsvector() will raise an error */\n\nThe character before tsvector() seems to be special.\n\nCheers", "msg_date": "Sun, 2 May 2021 11:09:49 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: websearch_to_tsquery() returns queries that don't match\n to_tsvector()" }, { "msg_contents": "On Sun, May 2, 2021 at 9:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > Ooops, I've included this by oversight. The next revision is attached.\n> > Anything besides that?\n>\n> Some quick eyeball review:\n>\n> + /* Everything is quotes is processed as a single token */\n>\n> Should read \"Everything in quotes ...\"\n>\n> - /* or else gettoken_tsvector() will raise an error */\n> + /* or else ƒtsvector() will raise an error */\n>\n> Looks like an unintentional change?\n\nThank you for catching this!\n\n> @@ -846,7 +812,6 @@ parse_tsquery(char *buf,\n> state.buffer = buf;\n> state.buf = buf;\n> state.count = 0;\n> - state.in_quotes = false;\n> state.state = WAITFIRSTOPERAND;\n> state.polstr = NIL;\n>\n> This change seems wrong/unsafe too.\n\nIt seems OK, because this patch removes in_quotes field altogether.\nWe don't have to know whether we in quotes in the state, since we\nprocess everything in quotes as a single token.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Sun, 2 May 2021 21:12:11 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: websearch_to_tsquery() returns queries that don't match\n to_tsvector()" }, { "msg_contents": "On Sun, May 2, 2021 at 9:06 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> 
+ /* Everything is quotes is processed as a single token */\n>\n> is quotes -> in quotes\n>\n> + /* iterate to the closing quotes or end of the string*/\n>\n> closing quotes -> closing quote\n>\n> + /* or else ƒtsvector() will raise an error */\n>\n> The character before tsvector() seems to be special.\n\nThank you for catching. Fixed in v3.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Sun, 2 May 2021 21:12:44 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: websearch_to_tsquery() returns queries that don't match\n to_tsvector()" }, { "msg_contents": "On Sun, May 2, 2021 at 9:17 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> One minor comment:\n> + /* iterate to the closing quotes or end of the string*/\n>\n> closing quotes -> closing quote\n\nYep, I've missed the third place to change from plural to single form :)\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Sun, 2 May 2021 21:19:26 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: websearch_to_tsquery() returns queries that don't match\n to_tsvector()" }, { "msg_contents": "On Sun, May 2, 2021 at 11:12 AM Alexander Korotkov <aekorotkov@gmail.com>\nwrote:\n\n> On Sun, May 2, 2021 at 9:06 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> > + /* Everything is quotes is processed as a single\n> token */\n> >\n> > is quotes -> in quotes\n> >\n> > + /* iterate to the closing quotes or end of the\n> string*/\n> >\n> > closing quotes -> closing quote\n> >\n> > + /* or else ƒtsvector() will raise an error */\n> >\n> > The character before tsvector() seems to be special.\n>\n> Thank you for catching. 
Fixed in v3.\n>\n> ------\n> Regards,\n> Alexander Korotkov\n>\n\nHi,\nOne minor comment:\n+ /* iterate to the closing quotes or end of the string*/\n\nclosing quotes -> closing quote\n\nCheers", "msg_date": "Sun, 2 May 2021 11:21:14 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: websearch_to_tsquery() returns queries that don't match\n to_tsvector()" }, { "msg_contents": "Alexander Korotkov <aekorotkov@gmail.com> writes:\n> On Sun, May 2, 2021 at 9:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> - state.in_quotes = false;\n>> \n>> This change seems wrong/unsafe too.\n\n> It seems OK, because this patch removes in_quotes field altogether.\n\nOh, sorry, I misread the patch --- I thought that earlier hunk\nwas removing a local variable. 
Agreed, if you can do without this\nstate field altogether, that's fine.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 02 May 2021 14:37:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: websearch_to_tsquery() returns queries that don't match\n to_tsvector()" }, { "msg_contents": "On Sun, May 2, 2021 at 9:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > On Sun, May 2, 2021 at 9:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> - state.in_quotes = false;\n> >>\n> >> This change seems wrong/unsafe too.\n>\n> > It seems OK, because this patch removes in_quotes field altogether.\n>\n> Oh, sorry, I misread the patch --- I thought that earlier hunk\n> was removing a local variable. Agreed, if you can do without this\n> state field altogether, that's fine.\n\nOK, thank you for review!\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Sun, 2 May 2021 21:41:14 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: websearch_to_tsquery() returns queries that don't match\n to_tsvector()" } ]
[ { "msg_contents": "Hi:\n\nWe know volatile is very harmful for optimizers and it is the default\nvalue (and safest value) if the user doesn't provide that. Asking user\nto set the value is not a good experience, is it possible to auto-generate\nthe value for it rather than use the volatile directly for user defined\nfunction. I\nthink it should be possible, we just need to scan the PlpgSQL_stmt to see\nif there\nis a volatile function?\n\nThe second question \"It is v for “volatile” functions, whose results might\nchange at any time.\n(Use v also for functions with side-effects, so that calls to them cannot\nget optimized away.)\"\nI think they are different semantics. One of the results is volatile\nfunctions can't be removed\nby remove_unused_subquery_output even if it doesn't have side effects. for\nexample:\nselect b from (select an_expensive_random(), b from t); Is it by design\non purpose?\n\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)\n\nHi:We know volatile is very harmful for optimizers and it is the defaultvalue (and safest value) if the user doesn't provide that.  Asking userto set the value is not a good experience,  is it possible to auto-generatethe value for it rather than use the volatile directly for user defined function. Ithink it should be possible, we just need to scan the PlpgSQL_stmt to see if thereis a volatile function? The second question \"It is v for “volatile” functions, whose results might change at any time. (Use v also for functions with side-effects, so that calls to them cannot get optimized away.)\"I think they are different semantics.  One of the results is volatile functions can't be removed by remove_unused_subquery_output even if it doesn't have side effects. for example:select b from (select an_expensive_random(), b from t);   Is it by design on purpose? 
-- Best RegardsAndy Fan (https://www.aliyun.com/)", "msg_date": "Sun, 18 Apr 2021 23:06:15 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "2 questions about volatile attribute of pg_proc." }, { "msg_contents": "ne 18. 4. 2021 v 17:06 odesílatel Andy Fan <zhihui.fan1213@gmail.com>\nnapsal:\n\n> Hi:\n>\n> We know volatile is very harmful for optimizers and it is the default\n> value (and safest value) if the user doesn't provide that. Asking user\n> to set the value is not a good experience, is it possible to auto-generate\n> the value for it rather than use the volatile directly for user defined\n> function. I\n> think it should be possible, we just need to scan the PlpgSQL_stmt to see\n> if there\n> is a volatile function?\n>\n\nplpgsql_check does this check - the performance check check if function can\nbe marked as stable\n\nhttps://github.com/okbob/plpgsql_check\n\nI don't think so this can be done automatically - plpgsql does not check\nobjects inside in registration time. You can use objects and functions that\ndon't exist in CREATE FUNCTION time. And you need to know this info before\noptimization time. So if we implement this check automatically, then\nplanning time can be increased a lot.\n\nRegards\n\nPavel\n\n\n> The second question \"It is v for “volatile” functions, whose results might\n> change at any time.\n> (Use v also for functions with side-effects, so that calls to them cannot\n> get optimized away.)\"\n> I think they are different semantics. One of the results is volatile\n> functions can't be removed\n> by remove_unused_subquery_output even if it doesn't have side effects. for\n> example:\n> select b from (select an_expensive_random(), b from t); Is it by design\n> on purpose?\n>\n>\n> --\n> Best Regards\n> Andy Fan (https://www.aliyun.com/)\n>\n\nne 18. 4. 
2021 v 17:06 odesílatel Andy Fan <zhihui.fan1213@gmail.com> napsal:Hi:We know volatile is very harmful for optimizers and it is the defaultvalue (and safest value) if the user doesn't provide that.  Asking userto set the value is not a good experience,  is it possible to auto-generatethe value for it rather than use the volatile directly for user defined function. Ithink it should be possible, we just need to scan the PlpgSQL_stmt to see if thereis a volatile function? plpgsql_check does this check - the performance check check if function can be marked as stablehttps://github.com/okbob/plpgsql_checkI don't think so this can be done automatically - plpgsql does not check objects inside in registration time. You can use objects and functions that don't exist in CREATE FUNCTION time. And you need to know this info before optimization time. So if we implement this check automatically, then planning time can be increased a lot.RegardsPavelThe second question \"It is v for “volatile” functions, whose results might change at any time. (Use v also for functions with side-effects, so that calls to them cannot get optimized away.)\"I think they are different semantics.  One of the results is volatile functions can't be removed by remove_unused_subquery_output even if it doesn't have side effects. for example:select b from (select an_expensive_random(), b from t);   Is it by design on purpose? -- Best RegardsAndy Fan (https://www.aliyun.com/)", "msg_date": "Sun, 18 Apr 2021 17:13:26 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 2 questions about volatile attribute of pg_proc." }, { "msg_contents": "Andy Fan <zhihui.fan1213@gmail.com> writes:\n> We know volatile is very harmful for optimizers and it is the default\n> value (and safest value) if the user doesn't provide that. 
Asking user\n> to set the value is not a good experience, is it possible to auto-generate\n> the value for it rather than use the volatile directly for user defined\n> function. I\n> think it should be possible, we just need to scan the PlpgSQL_stmt to see\n> if there\n> is a volatile function?\n\nAre you familiar with the halting problem? I don't see any meaningful\ndifference here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 18 Apr 2021 11:36:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 2 questions about volatile attribute of pg_proc." }, { "msg_contents": "On Sun, 18 Apr 2021 at 11:36, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andy Fan <zhihui.fan1213@gmail.com> writes:\n> > We know volatile is very harmful for optimizers and it is the default\n> > value (and safest value) if the user doesn't provide that. Asking user\n> > to set the value is not a good experience, is it possible to\n> auto-generate\n> > the value for it rather than use the volatile directly for user defined\n> > function. I\n> > think it should be possible, we just need to scan the PlpgSQL_stmt to see\n> > if there\n> > is a volatile function?\n>\n> Are you familiar with the halting problem? I don't see any meaningful\n> difference here.\n>\n\nI think what is being suggested is akin to type checking, not solving the\nhalting problem. 
Parse the function text, identify all functions it might\ncall (without solving the equivalent of the halting problem to see if it\nactually does or could), and apply the most volatile value of called\nfunctions to the calling function.\n\nThat being said, there are significant difficulties, including but almost\ncertainly not limited to:\n\n- what happens if one modifies a called function after creating the calling\nfunction?\n- EXECUTE\n- a PL/PGSQL function's meaning depends on the search path in effect when\nit is called, unless it has a SET search_path clause or it fully qualifies\nall object references, so it isn't actually possible in general to\ndetermine what a function calls at definition time\n\nIf the Haskell compiler is possible then what is being requested here is\nconceptually possible even if there are major issues with actually doing it\nin the Postgres context. The halting problem is not the problem here.\n\nOn Sun, 18 Apr 2021 at 11:36, Tom Lane <tgl@sss.pgh.pa.us> wrote:Andy Fan <zhihui.fan1213@gmail.com> writes:\n> We know volatile is very harmful for optimizers and it is the default\n> value (and safest value) if the user doesn't provide that.  Asking user\n> to set the value is not a good experience,  is it possible to auto-generate\n> the value for it rather than use the volatile directly for user defined\n> function. I\n> think it should be possible, we just need to scan the PlpgSQL_stmt to see\n> if there\n> is a volatile function?\n\nAre you familiar with the halting problem?  I don't see any meaningful\ndifference here.I think what is being suggested is akin to type checking, not solving the halting problem. 
Parse the function text, identify all functions it might call (without solving the equivalent of the halting problem to see if it actually does or could), and apply the most volatile value of called functions to the calling function.That being said, there are significant difficulties, including but almost certainly not limited to:- what happens if one modifies a called function after creating the calling function?- EXECUTE- a PL/PGSQL function's meaning depends on the search path in effect when it is called, unless it has a SET search_path clause or it fully qualifies all object references, so it isn't actually possible in general to determine what a function calls at definition timeIf the Haskell compiler is possible then what is being requested here is conceptually possible even if there are major issues with actually doing it in the Postgres context. The halting problem is not the problem here.", "msg_date": "Sun, 18 Apr 2021 11:54:25 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 2 questions about volatile attribute of pg_proc." }, { "msg_contents": "Isaac Morland <isaac.morland@gmail.com> writes:\n> On Sun, 18 Apr 2021 at 11:36, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Are you familiar with the halting problem? I don't see any meaningful\n>> difference here.\n\n> I think what is being suggested is akin to type checking, not solving the\n> halting problem.\n\nYeah, on further thought we'd be satisfied with a conservative\napproximation, so that removes the theoretical-impossibility objection.\nStill, there are a lot of remaining problems, as you note.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 18 Apr 2021 12:08:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 2 questions about volatile attribute of pg_proc." 
}, { "msg_contents": "On Sun, Apr 18, 2021 at 9:08 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Isaac Morland <isaac.morland@gmail.com> writes:\n> > On Sun, 18 Apr 2021 at 11:36, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Are you familiar with the halting problem? I don't see any meaningful\n> >> difference here.\n>\n> > I think what is being suggested is akin to type checking, not solving the\n> > halting problem.\n>\n> Yeah, on further thought we'd be satisfied with a conservative\n> approximation, so that removes the theoretical-impossibility objection.\n> Still, there are a lot of remaining problems, as you note.\n>\n>\nYeah, the type checking approach seems blocked by the \"black box\" nature of\nfunctions. A possibly more promising approach is for the top-level call to\ndeclare its expectations (which are set by the user) and during execution\nif that expectation is violated directly, or is reported as violated deeper\nin the call stack, the execution of the function fails with some kind of\ninvalid state error. However, as with other suggestions of this nature,\nthe fundamental blocker here is that to be particularly useful this kind of\nvalidation needs to happen by default (as opposed to opt-in) which risks\nbreaking existing code. And so I foresee this request falling into the\nsame category as those others - an interesting idea that could probably be\nmade to work but by itself isn't worthwhile enough to go and introduce a\nfundamental change to the amount of \"parental oversight\" PostgreSQL tries\nto provide.\n\nDavid J.\n\nOn Sun, Apr 18, 2021 at 9:08 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:Isaac Morland <isaac.morland@gmail.com> writes:\n> On Sun, 18 Apr 2021 at 11:36, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Are you familiar with the halting problem?  
I don't see any meaningful\n>> difference here.\n\n> I think what is being suggested is akin to type checking, not solving the\n> halting problem.\n\nYeah, on further thought we'd be satisfied with a conservative\napproximation, so that removes the theoretical-impossibility objection.\nStill, there are a lot of remaining problems, as you note.Yeah, the type checking approach seems blocked by the \"black box\" nature of functions.  A possibly more promising approach is for the top-level call to declare its expectations (which are set by the user) and during execution if that expectation is violated directly, or is reported as violated deeper in the call stack, the execution of the function fails with some kind of invalid state error.  However, as with other suggestions of this nature, the fundamental blocker here is that to be particularly useful this kind of validation needs to happen by default (as opposed to opt-in) which risks breaking existing code.  And so I foresee this request falling into the same category as those others - an interesting idea that could probably be made to work but by itself isn't worthwhile enough to go and introduce a fundamental change to the amount of \"parental oversight\" PostgreSQL tries to provide.David J.", "msg_date": "Sun, 18 Apr 2021 09:27:01 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 2 questions about volatile attribute of pg_proc." }, { "msg_contents": "> - a PL/PGSQL function's meaning depends on the search path in effect when\nit is called, unless it has a SET search_path clause or it fully qualifies\nall object references, so it isn't actually possible in general to\ndetermine what a function calls at definition time\n\n\nI'd think this one as a blocker issue at the beginning since I have to\ninsist on\nany new features should not cause semantic changes for existing ones. Later\nI\nfound the new definition. 
As for this feature request, I think we can\ndefine the\nfeatures like this:\n\n1. We define a new attribute named VOLATILE_AUTO; The semantic is PG will\nauto\n detect the volatile info based on current search_path / existing\n function. If any embedded function can't be found, we can raise an error\nif\n VOLATILE_AUTO is used. If people change the volatile attribute later, we\ncan:\n a). do nothing. This can be the documented feature. or. b). Maintain the\n dependency tree between functions and if anyone is changed, other\nfunctions\n should be recalculated as well.\n\n2. VOLATILE_AUTO should never be the default value. It only works when\npeople\n requires it.\n\nThen what we can get from this? Thinking a user is migrating lots of UDF\nfrom\nother databases. Asking them to check/set each function's attribute might\nbe bad. However if we tell them about how VOLATILE_AUTO works, and they\naccept it (I guess most people would accept), then the migration would be\npretty productive.\n\nI'm listening to any obvious reason to reject it.\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)\n\n> - a PL/PGSQL function's meaning depends on the search path in effect when it is called, unless it has a SET search_path clause or it fully qualifies all object references, so it isn't actually possible in general to determine what a function calls at definition timeI'd think this one as a blocker issue at the beginning since I have to insist onany new features should not cause semantic changes for existing ones. Later Ifound the new definition. As for this feature request, I think we can define thefeatures like this:1. We define a new attribute named VOLATILE_AUTO;  The semantic is PG will auto   detect the volatile info based on current search_path / existing   function. If any embedded function can't be found, we can raise an error if   VOLATILE_AUTO is used. If people change the volatile attribute later, we can:   a). do nothing. This can be the documented feature. or. 
b). Maintain the   dependency tree between functions and if anyone is changed, other functions   should be recalculated as well.2. VOLATILE_AUTO should never be the default value. It only works when people   requires it.Then what we can get from this?  Thinking a user is migrating lots of UDF from other databases.  Asking them to check/set each function's attribute might be bad. However if we tell them about how VOLATILE_AUTO works, and they accept it (I guess most people would accept), then the migration would be pretty productive.I'm listening to any obvious reason to reject it.-- Best RegardsAndy Fan (https://www.aliyun.com/)", "msg_date": "Tue, 20 Apr 2021 10:47:10 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: 2 questions about volatile attribute of pg_proc." }, { "msg_contents": ">\n> I'm listening to any obvious reason to reject it.\n>\n> Any obvious reason to reject it because of it would be a losing battle for\nsure,\nso I would not waste time on it. Or vote up if you think it is possible and\nuseful.\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)\n\nI'm listening to any obvious reason to reject it.Any obvious reason to reject it because of it would be a losing battle for sure,so I would not waste time on it.  Or vote up if you think it is possible and useful.  -- Best RegardsAndy Fan (https://www.aliyun.com/)", "msg_date": "Tue, 20 Apr 2021 10:55:53 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: 2 questions about volatile attribute of pg_proc." }, { "msg_contents": "út 20. 4. 
2021 v 4:47 odesílatel Andy Fan <zhihui.fan1213@gmail.com> napsal:\n\n>\n>\n> > - a PL/PGSQL function's meaning depends on the search path in effect\n> when it is called, unless it has a SET search_path clause or it fully\n> qualifies all object references, so it isn't actually possible in general\n> to determine what a function calls at definition time\n>\n>\n> I'd think this one as a blocker issue at the beginning since I have to\n> insist on\n> any new features should not cause semantic changes for existing ones.\n> Later I\n> found the new definition. As for this feature request, I think we can\n> define the\n> features like this:\n>\n> 1. We define a new attribute named VOLATILE_AUTO; The semantic is PG will\n> auto\n> detect the volatile info based on current search_path / existing\n> function. If any embedded function can't be found, we can raise an\n> error if\n> VOLATILE_AUTO is used. If people change the volatile attribute later,\n> we can:\n> a). do nothing. This can be the documented feature. or. b). Maintain the\n> dependency tree between functions and if anyone is changed, other\n> functions\n> should be recalculated as well.\n>\n> 2. VOLATILE_AUTO should never be the default value. It only works when\n> people\n> requires it.\n>\n> Then what we can get from this? Thinking a user is migrating lots of UDF\n> from\n> other databases. Asking them to check/set each function's attribute might\n> be bad. 
However if we tell them about how VOLATILE_AUTO works, and they\n> accept it (I guess most people would accept), then the migration would be\n> pretty productive.\n>\n> I'm listening to any obvious reason to reject it.\n>\n\na) This analyses can be very slow - PLpgSQL does lazy planning - query\nplans are planned only when are required - and this feature requires\ncomplete planning current function and all nested VOLATILE_AUTO functions -\nso start of function can be significantly slower\n\nb) When you migrate from Oracle,then you can use the STABLE flag, and it\nwill be mostly correct.\n\nRegards\n\nPavel\n\n\n\n> --\n> Best Regards\n> Andy Fan (https://www.aliyun.com/)\n>\n\nút 20. 4. 2021 v 4:47 odesílatel Andy Fan <zhihui.fan1213@gmail.com> napsal:> - a PL/PGSQL function's meaning depends on the search path in effect when it is called, unless it has a SET search_path clause or it fully qualifies all object references, so it isn't actually possible in general to determine what a function calls at definition timeI'd think this one as a blocker issue at the beginning since I have to insist onany new features should not cause semantic changes for existing ones. Later Ifound the new definition. As for this feature request, I think we can define thefeatures like this:1. We define a new attribute named VOLATILE_AUTO;  The semantic is PG will auto   detect the volatile info based on current search_path / existing   function. If any embedded function can't be found, we can raise an error if   VOLATILE_AUTO is used. If people change the volatile attribute later, we can:   a). do nothing. This can be the documented feature. or. b). Maintain the   dependency tree between functions and if anyone is changed, other functions   should be recalculated as well.2. VOLATILE_AUTO should never be the default value. It only works when people   requires it.Then what we can get from this?  Thinking a user is migrating lots of UDF fromother databases.  
Asking them to check/set each function's attribute mightbe bad. However if we tell them about how VOLATILE_AUTO works, and theyaccept it (I guess most people would accept), then the migration would bepretty productive.I'm listening to any obvious reason to reject it.a) This analyses can be very slow - PLpgSQL does lazy planning - query plans are planned only when are required - and this feature requires complete planning current function and all nested VOLATILE_AUTO functions - so start of function can be significantly slower b) When you migrate from Oracle,then you can use the STABLE flag, and it will be mostly correct. RegardsPavel-- Best RegardsAndy Fan (https://www.aliyun.com/)", "msg_date": "Tue, 20 Apr 2021 04:57:00 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 2 questions about volatile attribute of pg_proc." }, { "msg_contents": "On Tue, Apr 20, 2021 at 10:57 AM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n>\n>\n> út 20. 4. 2021 v 4:47 odesílatel Andy Fan <zhihui.fan1213@gmail.com>\n> napsal:\n>\n>>\n>>\n>> > - a PL/PGSQL function's meaning depends on the search path in effect\n>> when it is called, unless it has a SET search_path clause or it fully\n>> qualifies all object references, so it isn't actually possible in general\n>> to determine what a function calls at definition time\n>>\n>>\n>> I'd think this one as a blocker issue at the beginning since I have to\n>> insist on\n>> any new features should not cause semantic changes for existing ones.\n>> Later I\n>> found the new definition. As for this feature request, I think we can\n>> define the\n>> features like this:\n>>\n>> 1. We define a new attribute named VOLATILE_AUTO; The semantic is PG\n>> will auto\n>> detect the volatile info based on current search_path / existing\n>> function. If any embedded function can't be found, we can raise an\n>> error if\n>> VOLATILE_AUTO is used. 
If people change the volatile attribute later,\n>> we can:\n>> a). do nothing. This can be the documented feature. or. b). Maintain\n>> the\n>> dependency tree between functions and if anyone is changed, other\n>> functions\n>> should be recalculated as well.\n>>\n>> 2. VOLATILE_AUTO should never be the default value. It only works when\n>> people\n>> requires it.\n>>\n>> Then what we can get from this? Thinking a user is migrating lots of UDF\n>> from\n>> other databases. Asking them to check/set each function's attribute might\n>> be bad. However if we tell them about how VOLATILE_AUTO works, and they\n>> accept it (I guess most people would accept), then the migration would be\n>> pretty productive.\n>>\n>> I'm listening to any obvious reason to reject it.\n>>\n>\n> a) This analyses can be very slow - PLpgSQL does lazy planning - query\n> plans are planned only when are required - and this feature requires\n> complete planning current function and all nested VOLATILE_AUTO functions -\n> so start of function can be significantly slower\n>\n\nActually I am thinking we can do this when we compile the function, which\nmeans that would\nhappen on the \"CREATE FUNCTION \" stage. this would need some hacks for\nsure. Does\nthis remove your concern?\n\n\n> b) When you migrate from Oracle,then you can use the STABLE flag, and it\n> will be mostly correct.\n>\n\nThis was suggested in our team as well, but I don't think it is very\nstrict. For example:\nSELECT materialize_bills_for(userId) from users; Any more proof to say\n\"STABLE\" flag\nis acceptable?\n\n\n\n> --\n>> Best Regards\n>> Andy Fan (https://www.aliyun.com/)\n>>\n>\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)\n\nOn Tue, Apr 20, 2021 at 10:57 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:út 20. 4. 
2021 v 4:47 odesílatel Andy Fan <zhihui.fan1213@gmail.com> napsal:> - a PL/PGSQL function's meaning depends on the search path in effect when it is called, unless it has a SET search_path clause or it fully qualifies all object references, so it isn't actually possible in general to determine what a function calls at definition timeI'd think this one as a blocker issue at the beginning since I have to insist onany new features should not cause semantic changes for existing ones. Later Ifound the new definition. As for this feature request, I think we can define thefeatures like this:1. We define a new attribute named VOLATILE_AUTO;  The semantic is PG will auto   detect the volatile info based on current search_path / existing   function. If any embedded function can't be found, we can raise an error if   VOLATILE_AUTO is used. If people change the volatile attribute later, we can:   a). do nothing. This can be the documented feature. or. b). Maintain the   dependency tree between functions and if anyone is changed, other functions   should be recalculated as well.2. VOLATILE_AUTO should never be the default value. It only works when people   requires it.Then what we can get from this?  Thinking a user is migrating lots of UDF fromother databases.  Asking them to check/set each function's attribute mightbe bad. However if we tell them about how VOLATILE_AUTO works, and theyaccept it (I guess most people would accept), then the migration would bepretty productive.I'm listening to any obvious reason to reject it.a) This analyses can be very slow - PLpgSQL does lazy planning - query plans are planned only when are required - and this feature requires complete planning current function and all nested VOLATILE_AUTO functions - so start of function can be significantly slowerActually I am thinking  we can do this when we compile the function, which means that would happen on the \"CREATE FUNCTION \" stage.   this would need some hacks for sure.  
Doesthis remove your concern?  b) When you migrate from Oracle,then you can use the STABLE flag, and it will be mostly correct.This was suggested in our team as well, but I don't think it is very strict.  For example:  SELECT materialize_bills_for(userId) from users;  Any more proof to say \"STABLE\" flagis acceptable?  -- Best RegardsAndy Fan (https://www.aliyun.com/)\n\n-- Best RegardsAndy Fan (https://www.aliyun.com/)", "msg_date": "Tue, 20 Apr 2021 11:16:03 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: 2 questions about volatile attribute of pg_proc." }, { "msg_contents": "út 20. 4. 2021 v 5:16 odesílatel Andy Fan <zhihui.fan1213@gmail.com> napsal:\n\n>\n>\n> On Tue, Apr 20, 2021 at 10:57 AM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>>\n>>\n>> út 20. 4. 2021 v 4:47 odesílatel Andy Fan <zhihui.fan1213@gmail.com>\n>> napsal:\n>>\n>>>\n>>>\n>>> > - a PL/PGSQL function's meaning depends on the search path in effect\n>>> when it is called, unless it has a SET search_path clause or it fully\n>>> qualifies all object references, so it isn't actually possible in general\n>>> to determine what a function calls at definition time\n>>>\n>>>\n>>> I'd think this one as a blocker issue at the beginning since I have to\n>>> insist on\n>>> any new features should not cause semantic changes for existing ones.\n>>> Later I\n>>> found the new definition. As for this feature request, I think we can\n>>> define the\n>>> features like this:\n>>>\n>>> 1. We define a new attribute named VOLATILE_AUTO; The semantic is PG\n>>> will auto\n>>> detect the volatile info based on current search_path / existing\n>>> function. If any embedded function can't be found, we can raise an\n>>> error if\n>>> VOLATILE_AUTO is used. If people change the volatile attribute later,\n>>> we can:\n>>> a). do nothing. This can be the documented feature. or. b). 
Maintain\n>>> the\n>>> dependency tree between functions and if anyone is changed, other\n>>> functions\n>>> should be recalculated as well.\n>>>\n>>> 2. VOLATILE_AUTO should never be the default value. It only works when\n>>> people\n>>> requires it.\n>>>\n>>> Then what we can get from this? Thinking a user is migrating lots of\n>>> UDF from\n>>> other databases. Asking them to check/set each function's attribute\n>>> might\n>>> be bad. However if we tell them about how VOLATILE_AUTO works, and they\n>>> accept it (I guess most people would accept), then the migration would be\n>>> pretty productive.\n>>>\n>>> I'm listening to any obvious reason to reject it.\n>>>\n>>\n>> a) This analyses can be very slow - PLpgSQL does lazy planning - query\n>> plans are planned only when are required - and this feature requires\n>> complete planning current function and all nested VOLATILE_AUTO functions -\n>> so start of function can be significantly slower\n>>\n>\n> Actually I am thinking we can do this when we compile the function, which\n> means that would\n> happen on the \"CREATE FUNCTION \" stage. this would need some hacks for\n> sure. Does\n> this remove your concern?\n>\n\nyou cannot do it - with this you introduce strong dependency on nested\nobjects - and that means a lot of problems - necessity of rechecks when any\nnested object is changed. There will be new problems with dependency, when\nyou create functions, and until we have global temp tables, then it is\nblocker for usage of temporary tables. The current behavior is not perfect,\nbut in this direction is very practical, and I would not change it. 
Can be\nnice if some functionality of plpgsql_check can be in core, because I think\nso it is necessary for development, but the structure and integration of\nSQL in PLpgSQL is very good (and very practical).\n\n\n>\n>> b) When you migrate from Oracle,then you can use the STABLE flag, and it\n>> will be mostly correct.\n>>\n>\n> This was suggested in our team as well, but I don't think it is very\n> strict. For example:\n> SELECT materialize_bills_for(userId) from users; Any more proof to say\n> \"STABLE\" flag\n> is acceptable?\n>\n\nOracle doesn't allow write operations in functions. Or didn't allow it - I\nam not sure what is possible now. So when you migrate data from Oracle, and\nif the function is not marked as DETERMINISTIC, you can safely mark it as\nSTABLE. Ora2pg does it. Elsewhere - it works 99% well. In special cases,\nthere is some black magic - with fresh snapshots, and with using autonomous\ntransactions, and these cases should be solved manually. Sometimes is good\nenough just removing autonomous transactions, sometimes the complete\nrewrite is necessary - or redesign functionality.\n\n\n>\n>\n>> --\n>>> Best Regards\n>>> Andy Fan (https://www.aliyun.com/)\n>>>\n>>\n>\n> --\n> Best Regards\n> Andy Fan (https://www.aliyun.com/)\n>", "msg_date": "Tue, 20 Apr 2021 05:31:32 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 2 questions about volatile attribute of pg_proc." }, { "msg_contents": "On Tue, Apr 20, 2021 at 11:32 AM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n>\n>\n> út 20. 4. 
2021 v 4:47 odesílatel Andy Fan <zhihui.fan1213@gmail.com>\n>>> napsal:\n>>>\n>>>>\n>>>>\n>>>> > - a PL/PGSQL function's meaning depends on the search path in effect\n>>>> when it is called, unless it has a SET search_path clause or it fully\n>>>> qualifies all object references, so it isn't actually possible in general\n>>>> to determine what a function calls at definition time\n>>>>\n>>>>\n>>>> I'd think this one as a blocker issue at the beginning since I have to\n>>>> insist on\n>>>> any new features should not cause semantic changes for existing ones.\n>>>> Later I\n>>>> found the new definition. As for this feature request, I think we can\n>>>> define the\n>>>> features like this:\n>>>>\n>>>> 1. We define a new attribute named VOLATILE_AUTO; The semantic is PG\n>>>> will auto\n>>>> detect the volatile info based on current search_path / existing\n>>>> function. If any embedded function can't be found, we can raise an\n>>>> error if\n>>>> VOLATILE_AUTO is used. If people change the volatile attribute\n>>>> later, we can:\n>>>> a). do nothing. This can be the documented feature. or. b). Maintain\n>>>> the\n>>>> dependency tree between functions and if anyone is changed, other\n>>>> functions\n>>>> should be recalculated as well.\n>>>>\n>>>> 2. VOLATILE_AUTO should never be the default value. It only works when\n>>>> people\n>>>> requires it.\n>>>>\n>>>> Then what we can get from this? Thinking a user is migrating lots of\n>>>> UDF from\n>>>> other databases. Asking them to check/set each function's attribute\n>>>> might\n>>>> be bad. 
However if we tell them about how VOLATILE_AUTO works, and they\n>>>> accept it (I guess most people would accept), then the migration would\n>>>> be\n>>>> pretty productive.\n>>>>\n>>>> I'm listening to any obvious reason to reject it.\n>>>>\n>>>\n>>> a) This analyses can be very slow - PLpgSQL does lazy planning - query\n>>> plans are planned only when are required - and this feature requires\n>>> complete planning current function and all nested VOLATILE_AUTO functions -\n>>> so start of function can be significantly slower\n>>>\n>>\n>> Actually I am thinking we can do this when we compile the function,\n>> which means that would\n>> happen on the \"CREATE FUNCTION \" stage. this would need some hacks for\n>> sure. Does\n>> this remove your concern?\n>>\n>\n> you cannot do it - with this you introduce strong dependency on nested\n> objects\n>\n\nWhat does the plpgsql_check do in this area? I checked the README[1], but\ncan't find\nanything about it.\n\n\n> until we have global temp tables, then it is blocker for usage of\n> temporary tables.\n>\n\nCan you explain more about this?\n\n\n> Can be nice if some functionality of plpgsql_check can be in core,\n> because I think so it is necessary for development, but the structure and\n> integration of SQL in PLpgSQL is very good (and very practical).\n>\n>\nI'm interested in plpgsql_check. However I am still confused why we can do\nit in this way, but\ncan't do it in the VOLATILE_AUTO way.\n\n\n>\n>>\n>>> b) When you migrate from Oracle,then you can use the STABLE flag, and it\n>>> will be mostly correct.\n>>>\n>>\n>> This was suggested in our team as well, but I don't think it is very\n>> strict. For example:\n>> SELECT materialize_bills_for(userId) from users; Any more proof to say\n>> \"STABLE\" flag\n>> is acceptable?\n>>\n>\n> Oracle doesn't allow write operations in functions. Or didn't allow it - I\n> am not sure what is possible now. 
So when you migrate data from Oracle, and\n> if the function is not marked as DETERMINISTIC, you can safely mark it as\n> STABLE.\n>\n\nYou are correct. Good to know the above points.\n\n\n> Elsewhere - it works 99% well. In special cases, there is some black\n> magic - with fresh snapshots, and with using autonomous transactions, and\n> these cases should be solved manually. Sometimes is good enough just\n> removing autonomous transactions, sometimes the complete rewrite is\n> necessary - or redesign functionality.\n>\n\nis the 1% == \"special cases\" ? Do you mind sharing more information about\nthese cases,\neither document or code?\n\n[1] https://github.com/okbob/plpgsql_check/blob/master/README.md#features\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)", "msg_date": "Tue, 20 Apr 2021 13:32:05 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: 2 questions about volatile attribute of pg_proc." }, { "msg_contents": ">\n>\n>>\n>> you cannot do it - with this you introduce strong dependency on nested\n>> objects\n>>\n>\n> What does the plpgsql_check do in this area? I checked the README[1], but\n> can't find\n> anything about it.\n>\n\nWhen you run plpgsql_check with performance warning (disabled by default),\nthen it does check if all called functions are on the same or lower level\nthan checked functions have. So when all called operations are stable\n(read only), then the function can be marked as stable - and if the\nfunction is marked as volatile, then plpgsql_check raises an warning.\n\n\n>\n>> until we have global temp tables, then it is blocker for usage of\n>> temporary tables.\n>>\n>\nAll plpgsql expressions are SQL expressions - and anybody can call a\nfunction against a temporary table. But local temporary tables don't exist\nin typical CREATE FUNCTION time (registration time). Typically doesn't\nexist in plpgsql compile time too. 
Usually temporary tables are created\ninside executed plpgsql functions. So you cannot do any semantical (deeper)\ncheck in registration, or compile time. Just because one kind of object\n(temporary tables) doesn't exist. This is a difficult issue for\nplpgsql_check too.\n\n\n> Can you explain more about this?\n>\n>\n>> Can be nice if some functionality of plpgsql_check can be in core,\n>> because I think so it is necessary for development, but the structure and\n>> integration of SQL in PLpgSQL is very good (and very practical).\n>>\n>>\n> I'm interested in plpgsql_check. However I am still confused why we can\n> do it in this way, but\n> can't do it in the VOLATILE_AUTO way.\n>\n\nYou can do it. But you solve one issue, and introduce new kinds of more\nterrible issues (related to dependencies between database's objects). The\ndesign of plpgsql is pretty different from the design of Oracle's PL/SQL.\nSo proposed change means total conceptual change, and you need to write a\nlot of new code that will provide necessary infrastructure. I am not sure\nif the benefit is higher than the cost. It can be usable, if plpgsql can be\nreally compiled to some machine code - but it means ten thousands codes\nwithout significant benefits - the bottleneck inside stored procedures is\nSQL, and the compilation doesn't help with it.\n\n\n>\n>>\n>>>\n>>>> b) When you migrate from Oracle,then you can use the STABLE flag, and\n>>>> it will be mostly correct.\n>>>>\n>>>\n>>> This was suggested in our team as well, but I don't think it is very\n>>> strict. For example:\n>>> SELECT materialize_bills_for(userId) from users; Any more proof to say\n>>> \"STABLE\" flag\n>>> is acceptable?\n>>>\n>>\n>> Oracle doesn't allow write operations in functions. Or didn't allow it -\n>> I am not sure what is possible now. So when you migrate data from Oracle,\n>> and if the function is not marked as DETERMINISTIC, you can safely mark it\n>> as STABLE.\n>>\n>\n> You are correct. 
Good to know the above points.\n>\n\nAnd DETERMINISTIC functions are IMMUTABLE on Postgres's side\n\n\n>\n>> Elsewhere - it works 99% well. In special cases, there is some black\n>> magic - with fresh snapshots, and with using autonomous transactions, and\n>> these cases should be solved manually. Sometimes is good enough just\n>> removing autonomous transactions, sometimes the complete rewrite is\n>> necessary - or redesign functionality.\n>>\n>>\n> is the 1% == \"special cases\" ? Do you mind sharing more information about\n> these cases,\n> either document or code?\n>\n\nUnfortunately not. I have not well structured notes from work on ports from\nOracle to Postgres. And these 1% cases are very very variable. People are\nvery creative. But usually this code is almost very dirty, and not\ncritical. In Postgres we can use LISTEN, NOTIFY, or possibility to set\napp_name or we can use RAISE NOTICE.\n\n\n\n> [1] https://github.com/okbob/plpgsql_check/blob/master/README.md#features\n>\n>\n> --\n> Best Regards\n> Andy Fan (https://www.aliyun.com/)\n>", "msg_date": "Tue, 20 Apr 2021 07:58:05 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 2 questions about volatile attribute of pg_proc." } ]
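The propagation rule discussed in the thread above (a function's effective volatility is the loosest volatility among its own declared class and everything it transitively calls, which is essentially what plpgsql_check's performance warnings verify) can be sketched outside PostgreSQL. The following is a minimal, hypothetical illustration, not PostgreSQL code: the catalog layout and the function names (`lookup_rate`, `write_log`, etc.) are invented for the example.

```python
# Hypothetical sketch of the proposed VOLATILE_AUTO inference: derive a
# function's volatility from the loosest volatility of anything it calls.
# This is NOT PostgreSQL source code; the toy catalog below is invented.

# PostgreSQL's three volatility classes, ranked strictest to loosest.
RANK = {"immutable": 0, "stable": 1, "volatile": 2}

def infer_volatility(func, catalog, cache=None):
    """Return the loosest volatility reachable from `func`.

    `catalog` maps a function name to (declared_volatility, [callees]).
    A missing callee raises KeyError, mirroring the proposal to error out
    when an embedded function cannot be resolved.  Assumes no recursive
    call cycles; handling those would need an explicit visited set.
    """
    if cache is None:
        cache = {}
    if func not in cache:
        declared, callees = catalog[func]
        result = declared
        for callee in callees:
            child = infer_volatility(callee, catalog, cache)
            if RANK[child] > RANK[result]:
                result = child
        cache[func] = result
    return cache[func]

# Toy catalog: a volatile logger forces its caller to volatile, even though
# the caller was (incorrectly) declared immutable.
catalog = {
    "lower_bound": ("immutable", []),
    "lookup_rate": ("stable", ["lower_bound"]),              # reads a table
    "write_log": ("volatile", []),                           # performs an INSERT
    "log_and_rate": ("immutable", ["lookup_rate", "write_log"]),
}

print(infer_volatility("lookup_rate", catalog))   # stable
print(infer_volatility("log_and_rate", catalog))  # volatile
```

If `write_log` were later redefined as stable, `log_and_rate` would have to be re-derived, which is exactly the dependency-tracking cost objected to in the thread.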
[ { "msg_contents": "Hi,\n\nI assumed the cost for each nested VIEW layer would grow linear,\nbut my testing shows it appears to grow exponentially:\n\nCREATE TABLE foo (bar int);\nINSERT INTO foo (bar) VALUES (123);\n\nDO $_$\nDECLARE\nBEGIN\nCREATE OR REPLACE VIEW v1 AS SELECT * FROM foo;\nFOR i IN 1..256 LOOP\n EXECUTE format\n (\n $$\n CREATE OR REPLACE VIEW v%s AS\n SELECT * FROM v%s\n $$,\n i+1,\n i\n );\nEND LOOP;\nEND\n$_$;\n\nEXPLAIN ANALYZE SELECT * FROM foo;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------\nSeq Scan on foo (cost=0.00..35.50 rows=2550 width=4) (actual time=0.004..0.004 rows=1 loops=1)\nPlanning Time: 0.117 ms\nExecution Time: 0.011 ms\n(3 rows)\n\nEXPLAIN ANALYZE SELECT * FROM v1;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------\nSeq Scan on foo (cost=0.00..35.50 rows=2550 width=4) (actual time=0.002..0.003 rows=1 loops=1)\nPlanning Time: 0.019 ms\nExecution Time: 0.015 ms\n(3 rows)\n\nEXPLAIN ANALYZE SELECT * FROM v2;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------\nSeq Scan on foo (cost=0.00..35.50 rows=2550 width=4) (actual time=0.002..0.002 rows=1 loops=1)\nPlanning Time: 0.018 ms\nExecution Time: 0.011 ms\n(3 rows)\n\nEXPLAIN ANALYZE SELECT * FROM v4;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------\nSeq Scan on foo (cost=0.00..35.50 rows=2550 width=4) (actual time=0.002..0.002 rows=1 loops=1)\nPlanning Time: 0.030 ms\nExecution Time: 0.013 ms\n(3 rows)\n\nEXPLAIN ANALYZE SELECT * FROM v8;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------\nSeq Scan on foo (cost=0.00..35.50 rows=2550 width=4) (actual time=0.002..0.002 rows=1 loops=1)\nPlanning Time: 0.061 ms\nExecution Time: 0.016 ms\n(3 rows)\n\nEXPLAIN ANALYZE SELECT * FROM 
v16;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------\nSeq Scan on foo (cost=0.00..35.50 rows=2550 width=4) (actual time=0.002..0.003 rows=1 loops=1)\nPlanning Time: 0.347 ms\nExecution Time: 0.027 ms\n(3 rows)\n\nEXPLAIN ANALYZE SELECT * FROM v32;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------\nSeq Scan on foo (cost=0.00..35.50 rows=2550 width=4) (actual time=0.002..0.003 rows=1 loops=1)\nPlanning Time: 2.096 ms\nExecution Time: 0.044 ms\n(3 rows)\n\nEXPLAIN ANALYZE SELECT * FROM v64;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------\nSeq Scan on foo (cost=0.00..35.50 rows=2550 width=4) (actual time=0.004..0.005 rows=1 loops=1)\nPlanning Time: 14.981 ms\nExecution Time: 0.119 ms\n(3 rows)\n\nEXPLAIN ANALYZE SELECT * FROM v128;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------\nSeq Scan on foo (cost=0.00..35.50 rows=2550 width=4) (actual time=0.004..0.004 rows=1 loops=1)\nPlanning Time: 109.407 ms\nExecution Time: 0.187 ms\n(3 rows)\n\nEXPLAIN ANALYZE SELECT * FROM v256;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------\nSeq Scan on foo (cost=0.00..35.50 rows=2550 width=4) (actual time=0.006..0.007 rows=1 loops=1)\nPlanning Time: 1594.809 ms\nExecution Time: 0.531 ms\n(3 rows)", "msg_date": "Sun, 18 Apr 2021 20:58:53 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Planning time grows exponentially with levels of nested views" }, { "msg_contents": "[ redirecting to -hackers so the cfbot can see it ]\n\n\n\"Joel Jacobson\" <joel@compiler.org> writes:\n> I assumed the cost for each nested VIEW layer would grow linear,\n> but my testing shows it appears to grow exponentially:\n\nI think it's impossible to avoid less-than-O(N^2) growth on this sort\nof case. For example, the v2 subquery initially has RTEs for v2 itself\nplus v1. When we flatten v1 into v2, v2 acquires the RTEs from v1,\nnamely v1 itself plus foo. 
Similarly, once vK-1 is pulled up into vK,\nthere are going to be order-of-K entries in vK's rtable, and that stacking\nmakes for O(N^2) work overall just in manipulating the rtable.\n\nWe can't get rid of these rtable entries altogether, since all of them\nrepresent table privilege checks that the executor will need to do.\nIt occurs to me though that we don't need the rte->subquery trees anymore\nonce those are flattened, so maybe we could do something like the\nattached. For me, this reduces the slowdown in your example from\nO(N^3) to O(N^2).\n\nI'm slightly worried though by this comment earlier in\npull_up_simple_subquery:\n\n /*\n * Need a modifiable copy of the subquery to hack on. Even if we didn't\n * sometimes choose not to pull up below, we must do this to avoid\n * problems if the same subquery is referenced from multiple jointree\n * items (which can't happen normally, but might after rule rewriting).\n */\n\nIf multiple references are actually possible then this'd break it. There\nseem to be no such cases in the regression tests though, and I'm having a\nhard time wrapping my brain around what would cause it. \"git blame\"\ntraces this text to my own commit f44639e1b, which has the log entry\n\n Don't crash if subquery appears multiple times in jointree. 
This should\n not happen anyway, but let's try not to get completely confused if it does\n (due to rewriter bugs or whatever).\n\nso I'm thinking that this was only hypothetical.\n\nIt's possible that we could do something similar in the sibling functions\npull_up_simple_union_all etc, but I'm not sure it's worth troubling over.\nTBH, for the size of effect you're showing here, I wouldn't be worried\nat all; it's only because it seems to be a one-liner to improve it that\nI'm interested in doing something.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 18 Apr 2021 16:14:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Planning time grows exponentially with levels of nested views" }, { "msg_contents": "I wrote:\n> If multiple references are actually possible then this'd break it. There\n> seem to be no such cases in the regression tests though, and I'm having a\n> hard time wrapping my brain around what would cause it. \"git blame\"\n> traces this text to my own commit f44639e1b, which has the log entry\n> Don't crash if subquery appears multiple times in jointree. This should\n> not happen anyway, but let's try not to get completely confused if it does\n> (due to rewriter bugs or whatever).\n> so I'm thinking that this was only hypothetical.\n\nAh, found it. That was evidently a reaction to the immediately preceding\ncommit (352871ac9), which fixed a rewriter bug that could lead to exactly\nthe case of multiple jointree references to one RTE.\n\nI think this patch doesn't make things any worse for such a case though.\nIf we re-introduced such a bug, the result would be an immediate null\npointer crash while trying to process the second reference to a\nflattenable subquery. 
That's probably better for debuggability than\nwhat happens now, where we just silently process the duplicate reference.\n\nAnyway, I've stuck this into the next CF for future consideration.\n\n\t\t\tregards, tom lane\n\nPS: to save time for anyone else who wants to investigate this,\nit looks like the report mentioned in 352871ac9 was\n\nhttps://www.postgresql.org/message-id/007401c0860d%24bed809a0%241001a8c0%40archonet.com\n\n\n", "msg_date": "Sun, 18 Apr 2021 16:41:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Planning time grows exponentially with levels of nested views" }, { "msg_contents": "On Sun, Apr 18, 2021, at 22:14, Tom Lane wrote:\n> \"Joel Jacobson\" <joel@compiler.org <mailto:joel%40compiler.org>> writes:\n> > I assumed the cost for each nested VIEW layer would grow linear,\n> > but my testing shows it appears to grow exponentially:\n> \n> I think it's impossible to avoid less-than-O(N^2) growth on this sort\n> of case. For example, the v2 subquery initially has RTEs for v2 itself\n> plus v1. When we flatten v1 into v2, v2 acquires the RTEs from v1,\n> namely v1 itself plus foo. Similarly, once vK-1 is pulled up into vK,\n> there are going to be order-of-K entries in vK's rtable, and that stacking\n> makes for O(N^2) work overall just in manipulating the rtable.\n> \n> We can't get rid of these rtable entries altogether, since all of them\n> represent table privilege checks that the executor will need to do.\n> It occurs to me though that we don't need the rte->subquery trees anymore\n> once those are flattened, so maybe we could do something like the\n> attached. 
For me, this reduces the slowdown in your example from\n> O(N^3) to O(N^2).\n\nMany thanks for explaining and the patch.\n\nI've tested the patch successfully.\nPlanning time grows much slower now:\n\nEXPLAIN ANALYZE SELECT * FROM v64;\n- Planning Time: 14.981 ms\n+ Planning Time: 2.802 ms\n\nEXPLAIN ANALYZE SELECT * FROM v128;\n- Planning Time: 109.407 ms\n+ Planning Time: 11.595 ms\n\nEXPLAIN ANALYZE SELECT * FROM v256;\n- Planning Time: 1594.809 ms\n+ Planning Time: 46.709 ms\n\nVery nice.\n\n/Joel\n", "msg_date": "Sun, 18 Apr 2021 22:42:11 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Planning time grows exponentially with levels of nested views" }, { "msg_contents": "On Sun, 18 Apr 2021 at 21:42, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> > If multiple references are actually possible then this'd break it.\n>\n> I think this patch doesn't make things any worse for such a case though.\n> If we re-introduced such a bug, the result would be an immediate null\n> pointer crash while trying to process the second reference to a\n> flattenable subquery. That's probably better for debuggability than\n> what happens now, where we just silently process the duplicate reference.\n>\n\nI took a look at this and wasn't able to find any way to break it, and\nyour argument that it can't really make such rewriter bugs any worse\nmakes sense.\n\nWould it make sense to update the comment prior to copying the subquery?\n\nOut of curiosity, I also tested DML against these deeply nested views\nto see how the pull-up code in the rewriter compares in terms of\nperformance, since it does a very similar job. 
As expected, it's\nO(N^2) as well, but it's about an order of magnitude faster:\n\n(times to run a plain EXPLAIN in ms, with patch)\n\n | SELECT | INSERT | UPDATE | DELETE\n-----+--------+--------+--------+--------\nv64 | 1.259 | 0.189 | 0.250 | 0.187\nv128 | 5.035 | 0.506 | 0.547 | 0.509\nv256 | 20.393 | 1.633 | 1.607 | 1.638\nv512 | 81.101 | 6.649 | 6.517 | 6.699\n\nMaybe that's not surprising, since there's less to do at that stage.\nAnyway, it's reassuring to know that it copes OK with this (I've seen\nsome quite deeply nested views in practice, but never that deep).\n\nFor comparison, this is what SELECT performance looked like for me\nwithout the patch:\n\n | SELECT\n-----+----------\nv64 | 9.589\nv128 | 73.292\nv256 | 826.964\nv512 | 7844.419\n\nso, for a one-line change, that's pretty impressive.\n\nRegards,\nDean\n\n\n", "msg_date": "Tue, 6 Jul 2021 18:32:04 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Planning time grows exponentially with levels of nested views" }, { "msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> I took a look at this and wasn't able to find any way to break it, and\n> your argument that it can't really make such rewriter bugs any worse\n> makes sense.\n\nThanks for looking!\n\n> Would it make sense to update the comment prior to copying the subquery?\n\nYeah, I hadn't touched that yet because the question was exactly about\nwhether it's correct or not. I think we can shorten it to\n\n * Need a modifiable copy of the subquery to hack on, so that the\n * RTE can be left unchanged in case we decide below that we can't\n * pull it up after all.\n\n> Out of curiosity, I also tested DML against these deeply nested views\n> to see how the pull-up code in the rewriter compares in terms of\n> performance, since it does a very similar job. As expected, it's\n> O(N^2) as well, but it's about an order of magnitude faster:\n\nOh good. 
I hadn't thought to look at that angle of things.\n\n> ... for a one-line change, that's pretty impressive.\n\nYeah. The problem might get less bad if we get anywhere with the\nidea I suggested at [1]. If we can reduce the number of RTEs\nin a view's query, then copying it would get cheaper. Still,\nnot copying it at all is always going to be better. I'll go\nahead and push the patch.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/697679.1625154303%40sss.pgh.pa.us\n\n\n", "msg_date": "Tue, 06 Jul 2021 13:52:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Planning time grows exponentially with levels of nested views" } ]
[ { "msg_contents": "Hi,\n\nThe CREATE SUBSCRIPTION documentation [1] includes a list of \"WITH\"\noptions, which are currently in some kind of quasi alphabetical /\nrandom order which I found unnecessarily confusing.\n\nI can't think of any good reason for the current ordering, so PSA my\npatch which has identical content but just re-orders that option list\nto be alphabetical.\n\n------\n[1] = https://www.postgresql.org/docs/devel/sql-createsubscription.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Mon, 19 Apr 2021 09:59:44 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "PG Docs - CREATE SUBSCRIPTION option list order" }, { "msg_contents": "On Sun, Apr 18, 2021, at 8:59 PM, Peter Smith wrote:\n> The CREATE SUBSCRIPTION documentation [1] includes a list of \"WITH\"\n> options, which are currently in some kind of quasi alphabetical /\n> random order which I found unnecessarily confusing.\n> \n> I can't think of any good reason for the current ordering, so PSA my\n> patch which has identical content but just re-orders that option list\n> to be alphabetical.\nAFAICS there is not reason to use a random order here. I think this parameter\nlist is in frequency of use. Your patch looks good to me.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\n", "msg_date": "Sun, 18 Apr 2021 22:01:36 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "Re: PG Docs - CREATE SUBSCRIPTION option list order" }, { "msg_contents": "On Mon, Apr 19, 2021 at 6:32 AM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Sun, Apr 18, 2021, at 8:59 PM, Peter Smith wrote:\n>\n> The CREATE SUBSCRIPTION documentation [1] includes a list of \"WITH\"\n> options, which are currently in some kind of quasi alphabetical /\n> random order which I found unnecessarily confusing.\n>\n> I can't think of any good reason for the current ordering, so PSA my\n> patch which has identical content but just re-orders that option list\n> to be alphabetical.\n>\n> AFAICS there is not reason to use a random order here. I think this parameter\n> list is in frequency of use. Your patch looks good to me.\n>\n\nI also agree that the current order is quite random. One idea is to\nkeep them in alphabetical order as suggested by Peter and the other\ncould be to arrange parameters based on properties, for example, there\nare few parameters like binary, streaming, copy_data which are in some\nway related to the data being replicated and others are more of slot\nproperties (create_slot, slot_name). I see that few parameters among\nthese have some dependencies on other parameters as well. 
I noticed\nthat the other DDL commands like Create Table, Create Index doesn't\nhave the WITH clause parameters in any alphabetical order so it might\nbe better if the related parameters can be together here but if we\nthink it is not a good idea in this case due to some reason then\nprobably keeping them in alphabetical order makes sense.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 19 Apr 2021 09:38:54 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG Docs - CREATE SUBSCRIPTION option list order" }, { "msg_contents": "On Mon, Apr 19, 2021 at 2:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Apr 19, 2021 at 6:32 AM Euler Taveira <euler@eulerto.com> wrote:\n> >\n> > On Sun, Apr 18, 2021, at 8:59 PM, Peter Smith wrote:\n> >\n> > The CREATE SUBSCRIPTION documentation [1] includes a list of \"WITH\"\n> > options, which are currently in some kind of quasi alphabetical /\n> > random order which I found unnecessarily confusing.\n> >\n> > I can't think of any good reason for the current ordering, so PSA my\n> > patch which has identical content but just re-orders that option list\n> > to be alphabetical.\n> >\n> > AFAICS there is not reason to use a random order here. I think this parameter\n> > list is in frequency of use. Your patch looks good to me.\n> >\n>\n> I also agree that the current order is quite random. One idea is to\n> keep them in alphabetical order as suggested by Peter and the other\n> could be to arrange parameters based on properties, for example, there\n> are few parameters like binary, streaming, copy_data which are in some\n> way related to the data being replicated and others are more of slot\n> properties (create_slot, slot_name). I see that few parameters among\n> these have some dependencies on other parameters as well. 
I noticed\n> that the other DDL commands like Create Table, Create Index doesn't\n> have the WITH clause parameters in any alphabetical order so it might\n> be better if the related parameters can be together here but if we\n> think it is not a good idea in this case due to some reason then\n> probably keeping them in alphabetical order makes sense.\n>\n\nYes, if there were dozens of list items then I would agree that they\nshould be grouped somehow. But there aren't.\n\nI think what may seem like a clever grouping to one reader may look\nmore like an over-complicated muddle to somebody else.\n\nSo I prefer just to apply the KISS Principle.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n", "msg_date": "Mon, 19 Apr 2021 15:02:32 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: PG Docs - CREATE SUBSCRIPTION option list order" }, { "msg_contents": "On Mon, Apr 19, 2021 at 10:32 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Mon, Apr 19, 2021 at 2:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Apr 19, 2021 at 6:32 AM Euler Taveira <euler@eulerto.com> wrote:\n> > >\n> > > On Sun, Apr 18, 2021, at 8:59 PM, Peter Smith wrote:\n> > >\n> > > The CREATE SUBSCRIPTION documentation [1] includes a list of \"WITH\"\n> > > options, which are currently in some kind of quasi alphabetical /\n> > > random order which I found unnecessarily confusing.\n> > >\n> > > I can't think of any good reason for the current ordering, so PSA my\n> > > patch which has identical content but just re-orders that option list\n> > > to be alphabetical.\n> > >\n> > > AFAICS there is not reason to use a random order here. I think this parameter\n> > > list is in frequency of use. Your patch looks good to me.\n> > >\n> >\n> > I also agree that the current order is quite random. 
One idea is to\n> > keep them in alphabetical order as suggested by Peter and the other\n> > could be to arrange parameters based on properties, for example, there\n> > are few parameters like binary, streaming, copy_data which are in some\n> > way related to the data being replicated and others are more of slot\n> > properties (create_slot, slot_name). I see that few parameters among\n> > these have some dependencies on other parameters as well. I noticed\n> > that the other DDL commands like Create Table, Create Index doesn't\n> > have the WITH clause parameters in any alphabetical order so it might\n> > be better if the related parameters can be together here but if we\n> > think it is not a good idea in this case due to some reason then\n> > probably keeping them in alphabetical order makes sense.\n> >\n>\n> Yes, if there were dozens of list items then I would agree that they\n> should be grouped somehow. But there aren't.\n>\n\nI think this list is going to grow in the future as we enhance this\nsubsystem. For example, the pending 2PC patch will add another\nparameter to this list.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 19 Apr 2021 10:46:31 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG Docs - CREATE SUBSCRIPTION option list order" }, { "msg_contents": "v1 -> v2\n\nRebased.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Mon, 9 Aug 2021 12:42:35 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: PG Docs - CREATE SUBSCRIPTION option list order" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Mon, Apr 19, 2021 at 10:32 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>> Yes, if there were dozens of list items then I would agree that they\n>> should be grouped somehow. But there aren't.\n\n> I think this list is going to grow in the future as we enhance this\n> subsystem. 
For example, the pending 2PC patch will add another\n> parameter to this list.\n\nWell, we've got nine now; growing to ten wouldn't be a lot. However,\nI think that grouping the options would be helpful because the existing\npresentation seems extremely confusing. In particular, I think it might\nhelp to separate the options that only determine what happens during\nCREATE SUBSCRIPTION from those that control how replication behaves later.\n(Are the latter set the same ones that are shared with ALTER\nSUBSCRIPTION?)\n\nAlso, I think a lot of these descriptions desperately need copy-editing.\nThe grammar is shoddy and the markup is inconsistent.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 04 Sep 2021 14:53:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG Docs - CREATE SUBSCRIPTION option list order" }, { "msg_contents": "On Sun, Sep 5, 2021 at 12:23 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > On Mon, Apr 19, 2021 at 10:32 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >> Yes, if there were dozens of list items then I would agree that they\n> >> should be grouped somehow. But there aren't.\n>\n> > I think this list is going to grow in the future as we enhance this\n> > subsystem. For example, the pending 2PC patch will add another\n> > parameter to this list.\n>\n> Well, we've got nine now; growing to ten wouldn't be a lot. However,\n> I think that grouping the options would be helpful because the existing\n> presentation seems extremely confusing. In particular, I think it might\n> help to separate the options that only determine what happens during\n> CREATE SUBSCRIPTION from those that control how replication behaves later.\n>\n\n+1. I think we can group them as (a) create_slot, slot_name, enabled,\nconnect, and (b) copy_data, synchronous_commit, binary, streaming,\ntwo_phase. 
The first controls what happens during Create Subscription\nand the later ones control the replication behavior later.\n\n> (Are the latter set the same ones that are shared with ALTER\n> SUBSCRIPTION?)\n>\n\nIf we agree with the above categorization then not all of them fall\ninto the latter category.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 6 Sep 2021 08:44:02 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG Docs - CREATE SUBSCRIPTION option list order" }, { "msg_contents": "v2 --> v3\n\nThe subscription_parameter names are now split into 2 groups using\nAmit's suggestion [1] on how to categorise them.\n\nI also made some grammar improvements to their descriptions.\n\nPSA.\n\n------\n[1] https://www.postgresql.org/message-id/CAA4eK1Kmu74xHk2jcHTmKq8HBj3xK6n%3DRfiJB6dfV5zVSqqiFg%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Wed, 8 Sep 2021 16:54:01 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: PG Docs - CREATE SUBSCRIPTION option list order" }, { "msg_contents": "On Wed, Sep 8, 2021 at 12:24 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> v2 --> v3\n>\n> The subscription_parameter names are now split into 2 groups using\n> Amit's suggestion [1] on how to categorise them.\n>\n> I also made some grammar improvements to their descriptions.\n>\n\nI have made minor edits to your first patch and it looks good to me. 
I\nam not sure what exactly Tom has in mind related to grammatical\nimprovements, so it is better if he can look into that part of your\nproposal (basically second patch\nv4-0002-PG-Docs-Create-Subscription-options-rewording).\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Thu, 9 Sep 2021 09:50:17 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG Docs - CREATE SUBSCRIPTION option list order" }, { "msg_contents": "On Thu, Sep 9, 2021 at 9:50 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Sep 8, 2021 at 12:24 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > v2 --> v3\n> >\n> > The subscription_parameter names are now split into 2 groups using\n> > Amit's suggestion [1] on how to categorise them.\n> >\n> > I also made some grammar improvements to their descriptions.\n> >\n>\n> I have made minor edits to your first patch and it looks good to me.\n>\n\nPushed the first patch. I am not so sure about the second one so I\nwon't do anything for the same. I'll close this CF entry in a day or\ntwo unless there is an interest in the second patch.\n\nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 13 Sep 2021 11:04:58 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG Docs - CREATE SUBSCRIPTION option list order" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> Pushed the first patch. I am not so sure about the second one so I\n> won't do anything for the same. 
I'll close this CF entry in a day or\n> two unless there is an interest in the second patch.\n\nSorry for not reviewing this more promptly.\n\nI made some further edits in the 0002 patch and pushed it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 Sep 2021 14:28:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG Docs - CREATE SUBSCRIPTION option list order" }, { "msg_contents": "On Mon, Sep 13, 2021 at 11:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > Pushed the first patch. I am not so sure about the second one so I\n> > won't do anything for the same. I'll close this CF entry in a day or\n> > two unless there is an interest in the second patch.\n>\n> Sorry for not reviewing this more promptly.\n>\n> I made some further edits in the 0002 patch and pushed it.\n>\n\nThanks.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 14 Sep 2021 08:39:17 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG Docs - CREATE SUBSCRIPTION option list order" } ]
[ { "msg_contents": "Hi,\n\nMoving this topic into its own thread from the one about collation\nversions, because it concerns pre-existing problems, and that thread\nis long.\n\nCurrently initdb sets up template databases with old-style Windows\nlocale names reported by the OS, and they seem to have caused us quite\na few problems over the years:\n\ndb29620d \"Work around Windows locale name with non-ASCII character.\"\naa1d2fc5 \"Another attempt at fixing Windows Norwegian locale.\"\ndb477b69 \"Deal with yet another issue related to \"Norwegian (Bokmål)\"...\"\n9f12a3b9 \"Tolerate version lookup failure for old style Windows locale...\"\n\n... and probably more, and also various threads about , for example,\n\"German_German.1252\" vs \"German_Switzerland.1252\" which seem to get\nconfused or badly canonicalised or rejected somewhere in the mix.\n\nI hadn't focused on any of that before, being a non-Windows-user, but\nthe entire contents of win32setlocale.c supports the theory that\nWindows' manual meant what it said when it said[1]:\n\n\"We do not recommend this form for locale strings embedded in\ncode or serialized to storage, because these strings are more likely\nto be changed by an operating system update than the locale name\nform.\"\n\nI suppose that was the only form available at the time the code was\nwritten, so there was no choice. The question we asked ourselves\nmultiple times in the other thread was how we're supposed to get to\nthe modern BCP 47 form when creating the template databases. It looks\nlike one possibility, since Vista, is to call\nGetUserDefaultLocaleName()[2], which doesn't appear to have been\ndiscussed before on this list. That doesn't allow you to ask for the\ndefault for each individual category, but I don't know if that is even\na concept for Windows user settings. It may be that some of the other\nnearby functions give a better answer for some reason. 
But one thing\nis clear from a test that someone kindly ran for me: it reports\nstandardised strings like \"en-NZ\", not strings like \"English_New\nZealand.1252\".\n\nNo patch, but I wondered if any Windows hackers have any feedback on\nrelative sanity of trying to fix all these problems this way.\n\n[1] https://docs.microsoft.com/en-us/cpp/c-runtime-library/locale-names-languages-and-country-region-strings?view=msvc-160\n[2] https://docs.microsoft.com/en-us/windows/win32/api/winnls/nf-winnls-getuserdefaultlocalename\n\n\n", "msg_date": "Mon, 19 Apr 2021 17:42:51 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Windows default locale vs initdb" }, { "msg_contents": "po 19. 4. 2021 v 7:43 odesílatel Thomas Munro <thomas.munro@gmail.com>\nnapsal:\n\n> Hi,\n>\n> Moving this topic into its own thread from the one about collation\n> versions, because it concerns pre-existing problems, and that thread\n> is long.\n>\n> Currently initdb sets up template databases with old-style Windows\n> locale names reported by the OS, and they seem to have caused us quite\n> a few problems over the years:\n>\n> db29620d \"Work around Windows locale name with non-ASCII character.\"\n> aa1d2fc5 \"Another attempt at fixing Windows Norwegian locale.\"\n> db477b69 \"Deal with yet another issue related to \"Norwegian (Bokmål)\"...\"\n> 9f12a3b9 \"Tolerate version lookup failure for old style Windows locale...\"\n>\n> ... 
and probably more, and also various threads about , for example,\n> \"German_German.1252\" vs \"German_Switzerland.1252\" which seem to get\n> confused or badly canonicalised or rejected somewhere in the mix.\n>\n> I hadn't focused on any of that before, being a non-Windows-user, but\n> the entire contents of win32setlocale.c supports the theory that\n> Windows' manual meant what it said when it said[1]:\n>\n> \"We do not recommend this form for locale strings embedded in\n> code or serialized to storage, because these strings are more likely\n> to be changed by an operating system update than the locale name\n> form.\"\n>\n> I suppose that was the only form available at the time the code was\n> written, so there was no choice. The question we asked ourselves\n> multiple times in the other thread was how we're supposed to get to\n> the modern BCP 47 form when creating the template databases. It looks\n> like one possibility, since Vista, is to call\n> GetUserDefaultLocaleName()[2], which doesn't appear to have been\n> discussed before on this list. That doesn't allow you to ask for the\n> default for each individual category, but I don't know if that is even\n> a concept for Windows user settings. It may be that some of the other\n> nearby functions give a better answer for some reason. But one thing\n> is clear from a test that someone kindly ran for me: it reports\n> standardised strings like \"en-NZ\", not strings like \"English_New\n> Zealand.1252\".\n>\n> No patch, but I wondered if any Windows hackers have any feedback on\n> relative sanity of trying to fix all these problems this way.\n>\n\nLast weekend I talked with one user about one interesting (and messing)\nissue. They needed to create a new database with Czech collation on Azure\nSAS. There was not any entry in pg_collation for Czech language. 
The reply\nfrom Microsoft support was to use CREATE DATABASE xxx TEMPLATE 'template0'\nENCODING 'utf8' LOCALE 'cs_CZ.UTF8' and it was working.\n\nRegards\n\nPavel\n\n\n> [1]\n> https://docs.microsoft.com/en-us/cpp/c-runtime-library/locale-names-languages-and-country-region-strings?view=msvc-160\n> [2]\n> https://docs.microsoft.com/en-us/windows/win32/api/winnls/nf-winnls-getuserdefaultlocalename\n>\n>\n>\n\npo 19. 4. 2021 v 7:43 odesílatel Thomas Munro <thomas.munro@gmail.com> napsal:Hi,\n\nMoving this topic into its own thread from the one about collation\nversions, because it concerns pre-existing problems, and that thread\nis long.\n\nCurrently initdb sets up template databases with old-style Windows\nlocale names reported by the OS, and they seem to have caused us quite\na few problems over the years:\n\ndb29620d \"Work around Windows locale name with non-ASCII character.\"\naa1d2fc5 \"Another attempt at fixing Windows Norwegian locale.\"\ndb477b69 \"Deal with yet another issue related to \"Norwegian (Bokmål)\"...\"\n9f12a3b9 \"Tolerate version lookup failure for old style Windows locale...\"\n\n... and probably more, and also various threads about , for example,\n\"German_German.1252\" vs \"German_Switzerland.1252\" which seem to get\nconfused or badly canonicalised or rejected somewhere in the mix.\n\nI hadn't focused on any of that before, being a non-Windows-user, but\nthe entire contents of win32setlocale.c supports the theory that\nWindows' manual meant what it said when it said[1]:\n\n\"We do not recommend this form for locale strings embedded in\ncode or serialized to storage, because these strings are more likely\nto be changed by an operating system update than the locale name\nform.\"\n\nI suppose that was the only form available at the time the code was\nwritten, so there was no choice.  The question we asked ourselves\nmultiple times in the other thread was how we're supposed to get to\nthe modern BCP 47 form when creating the template databases.  
It looks\nlike one possibility, since Vista, is to call\nGetUserDefaultLocaleName()[2], which doesn't appear to have been\ndiscussed before on this list.  That doesn't allow you to ask for the\ndefault for each individual category, but I don't know if that is even\na concept for Windows user settings.  It may be that some of the other\nnearby functions give a better answer for some reason.  But one thing\nis clear from a test that someone kindly ran for me: it reports\nstandardised strings like \"en-NZ\", not strings like \"English_New\nZealand.1252\".\n\nNo patch, but I wondered if any Windows hackers have any feedback on\nrelative sanity of trying to fix all these problems this way.Last weekend I talked with one user about one interesting (and messing) issue. They needed to create a new database with Czech collation on Azure SAS. There was not any entry in pg_collation for Czech language. The reply from Microsoft support was to use CREATE DATABASE xxx TEMPLATE 'template0' ENCODING 'utf8' LOCALE 'cs_CZ.UTF8' and it was working. RegardsPavel\n\n[1] https://docs.microsoft.com/en-us/cpp/c-runtime-library/locale-names-languages-and-country-region-strings?view=msvc-160\n[2] https://docs.microsoft.com/en-us/windows/win32/api/winnls/nf-winnls-getuserdefaultlocalename", "msg_date": "Mon, 19 Apr 2021 10:52:18 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Windows default locale vs initdb" }, { "msg_contents": "On Mon, Apr 19, 2021 at 4:53 AM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n>\n>\n> po 19. 4. 
2021 v 7:43 odesílatel Thomas Munro <thomas.munro@gmail.com>\n> napsal:\n>\n>> Hi,\n>>\n>> Moving this topic into its own thread from the one about collation\n>> versions, because it concerns pre-existing problems, and that thread\n>> is long.\n>>\n>> Currently initdb sets up template databases with old-style Windows\n>> locale names reported by the OS, and they seem to have caused us quite\n>> a few problems over the years:\n>>\n>> db29620d \"Work around Windows locale name with non-ASCII character.\"\n>> aa1d2fc5 \"Another attempt at fixing Windows Norwegian locale.\"\n>> db477b69 \"Deal with yet another issue related to \"Norwegian (Bokmål)\"...\"\n>> 9f12a3b9 \"Tolerate version lookup failure for old style Windows locale...\"\n>>\n>> ... and probably more, and also various threads about , for example,\n>> \"German_German.1252\" vs \"German_Switzerland.1252\" which seem to get\n>> confused or badly canonicalised or rejected somewhere in the mix.\n>>\n>> I hadn't focused on any of that before, being a non-Windows-user, but\n>> the entire contents of win32setlocale.c supports the theory that\n>> Windows' manual meant what it said when it said[1]:\n>>\n>> \"We do not recommend this form for locale strings embedded in\n>> code or serialized to storage, because these strings are more likely\n>> to be changed by an operating system update than the locale name\n>> form.\"\n>>\n>> I suppose that was the only form available at the time the code was\n>> written, so there was no choice. The question we asked ourselves\n>> multiple times in the other thread was how we're supposed to get to\n>> the modern BCP 47 form when creating the template databases. It looks\n>> like one possibility, since Vista, is to call\n>> GetUserDefaultLocaleName()[2], which doesn't appear to have been\n>> discussed before on this list. That doesn't allow you to ask for the\n>> default for each individual category, but I don't know if that is even\n>> a concept for Windows user settings. 
It may be that some of the other\n>> nearby functions give a better answer for some reason.  But one thing\n>> is clear from a test that someone kindly ran for me: it reports\n>> standardised strings like \"en-NZ\", not strings like \"English_New\n>> Zealand.1252\".\n>>\n>> No patch, but I wondered if any Windows hackers have any feedback on\n>> relative sanity of trying to fix all these problems this way.\n>>\n>\n> Last weekend I talked with one user about one interesting (and messing)\n> issue. They needed to create a new database with Czech collation on Azure\n> SAS. There was not any entry in pg_collation for Czech language. The reply\n> from Microsoft support was to use CREATE DATABASE xxx TEMPLATE 'template0'\n> ENCODING 'utf8' LOCALE 'cs_CZ.UTF8' and it was working.\n>\n>\n>\nMy understanding from Microsoft staff at conferences is that Azure's\nPostgreSQL SAS runs on linux, not WIndows.\n\ncheers\n\nandrew\n\n
", "msg_date": "Mon, 19 Apr 2021 06:52:27 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Windows default locale vs initdb" }, { "msg_contents": "po 19. 4. 2021 v 12:52 odesílatel Andrew Dunstan <andrew@dunslane.net>\nnapsal:\n\n>\n>\n> On Mon, Apr 19, 2021 at 4:53 AM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>>\n>>\n>> po 19. 4. 2021 v 7:43 odesílatel Thomas Munro <thomas.munro@gmail.com>\n>> napsal:\n>>\n>>> Hi,\n>>>\n>>> Moving this topic into its own thread from the one about collation\n>>> versions, because it concerns pre-existing problems, and that thread\n>>> is long.\n>>>\n>>> Currently initdb sets up template databases with old-style Windows\n>>> locale names reported by the OS, and they seem to have caused us quite\n>>> a few problems over the years:\n>>>\n>>> db29620d \"Work around Windows locale name with non-ASCII character.\"\n>>> aa1d2fc5 \"Another attempt at fixing Windows Norwegian locale.\"\n>>> db477b69 \"Deal with yet another issue related to \"Norwegian (Bokmål)\"...\"\n>>> 9f12a3b9 \"Tolerate version lookup failure for old style Windows\n>>> locale...\"\n>>>\n>>> ... 
and probably more, and also various threads about , for example,\n>>> \"German_German.1252\" vs \"German_Switzerland.1252\" which seem to get\n>>> confused or badly canonicalised or rejected somewhere in the mix.\n>>>\n>>> I hadn't focused on any of that before, being a non-Windows-user, but\n>>> the entire contents of win32setlocale.c supports the theory that\n>>> Windows' manual meant what it said when it said[1]:\n>>>\n>>> \"We do not recommend this form for locale strings embedded in\n>>> code or serialized to storage, because these strings are more likely\n>>> to be changed by an operating system update than the locale name\n>>> form.\"\n>>>\n>>> I suppose that was the only form available at the time the code was\n>>> written, so there was no choice. The question we asked ourselves\n>>> multiple times in the other thread was how we're supposed to get to\n>>> the modern BCP 47 form when creating the template databases. It looks\n>>> like one possibility, since Vista, is to call\n>>> GetUserDefaultLocaleName()[2], which doesn't appear to have been\n>>> discussed before on this list. That doesn't allow you to ask for the\n>>> default for each individual category, but I don't know if that is even\n>>> a concept for Windows user settings. It may be that some of the other\n>>> nearby functions give a better answer for some reason. But one thing\n>>> is clear from a test that someone kindly ran for me: it reports\n>>> standardised strings like \"en-NZ\", not strings like \"English_New\n>>> Zealand.1252\".\n>>>\n>>> No patch, but I wondered if any Windows hackers have any feedback on\n>>> relative sanity of trying to fix all these problems this way.\n>>>\n>>\n>> Last weekend I talked with one user about one interesting (and messing)\n>> issue. They needed to create a new database with Czech collation on Azure\n>> SAS. There was not any entry in pg_collation for Czech language. 
The reply\n>> from Microsoft support was to use CREATE DATABASE xxx TEMPLATE 'template0'\n>> ENCODING 'utf8' LOCALE 'cs_CZ.UTF8' and it was working.\n>>\n>>\n>>\n> My understanding from Microsoft staff at conferences is that Azure's\n> PostgreSQL SAS runs on linux, not WIndows.\n>\n\nI had different informations, but still there was something wrong because\nno czech locales was in pg_collation\n\n\n\n>\n> cheers\n>\n> andrew\n>\n\n
", "msg_date": "Mon, 19 Apr 2021 12:57:11 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Windows default locale vs initdb" }, { "msg_contents": "On Mon, Apr 19, 2021 at 11:52 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n>\n> My understanding from Microsoft staff at conferences is that Azure's\n> PostgreSQL SAS runs on linux, not WIndows.\n>\n\nThis is from a regular Azure Database for PostgreSQL single server:\n\npostgres=> select version();\n version\n------------------------------------------------------------\n PostgreSQL 11.6, compiled by Visual C++ build 1800, 64-bit\n(1 row)\n\nAnd this is from the new Flexible Server preview:\n\npostgres=> select version();\n version\n\n-----------------------------------------------------------------------------------------------------------------\n PostgreSQL 12.6 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu\n5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609, 64-bit\n(1 row)\n\nSo I guess it's a case of \"it depends\".\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com\n\n
", "msg_date": "Mon, 19 Apr 2021 15:26:46 +0100", "msg_from": "Dave Page <dpage@pgadmin.org>", "msg_from_op": false, "msg_subject": "Re: Windows default locale vs initdb" }, { "msg_contents": "\nOn 4/19/21 10:26 AM, Dave Page wrote:\n>\n>\n> On Mon, Apr 19, 2021 at 11:52 AM Andrew Dunstan <andrew@dunslane.net\n> <mailto:andrew@dunslane.net>> wrote:\n>\n>\n> My understanding from Microsoft staff at conferences is that\n> Azure's PostgreSQL SAS runs on linux, not WIndows.\n>\n>\n> This is from a regular Azure Database for PostgreSQL single server:\n>\n> postgres=> select version();\n> version \n> ------------------------------------------------------------\n> PostgreSQL 11.6, compiled by Visual C++ build 1800, 64-bit\n> (1 row) \n>\n> And this is from the new Flexible Server preview:\n>\n> postgres=> select version();\n> version \n> \n> -----------------------------------------------------------------------------------------------------------------\n> PostgreSQL 12.6 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu\n> 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609, 64-bit\n> (1 row)\n>\n> So I guess it's a case of \"it depends\".\n>\n\nGood to know. 
A year or two back at more than one conference I tried to enlist some of these folks in helping us with Windows PostgreSQL and their reply was that they knew nothing about it because they were on Linux :-) I guess things change over time.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 19 Apr 2021 12:28:16 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Windows default locale vs initdb" }, { "msg_contents": "On 19.04.21 07:42, Thomas Munro wrote:\n> It looks\n> like one possibility, since Vista, is to call\n> GetUserDefaultLocaleName()[2], which doesn't appear to have been\n> discussed before on this list. That doesn't allow you to ask for the\n> default for each individual category, but I don't know if that is even\n> a concept for Windows user settings.\n\npg_newlocale_from_collation() doesn't support collcollate != collctype \non Windows anyway, so that wouldn't be an issue.\n\n\n", "msg_date": "Mon, 19 Apr 2021 20:16:55 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Windows default locale vs initdb" }, { "msg_contents": "On Mon, Apr 19, 2021 at 05:42:51PM +1200, Thomas Munro wrote:\n> Currently initdb sets up template databases with old-style Windows\n> locale names reported by the OS, and they seem to have caused us quite\n> a few problems over the years:\n> \n> db29620d \"Work around Windows locale name with non-ASCII character.\"\n> aa1d2fc5 \"Another attempt at fixing Windows Norwegian locale.\"\n> db477b69 \"Deal with yet another issue related to \"Norwegian (Bokmål)\"...\"\n> 9f12a3b9 \"Tolerate version lookup failure for old style Windows locale...\"\n\n> I suppose that was the only form available at the time the code was\n> written, so there was no choice.\n\nRight.\n\n> The question we asked ourselves\n> multiple times in the other thread was how we're supposed to get 
to\n> the modern BCP 47 form when creating the template databases. It looks\n> like one possibility, since Vista, is to call\n> GetUserDefaultLocaleName()[2]\n\n> No patch, but I wondered if any Windows hackers have any feedback on\n> relative sanity of trying to fix all these problems this way.\n\nSounds reasonable. If PostgreSQL v15 would otherwise run on Windows Server\n2003 R2, this is a good time to let that support end.\n\n\n", "msg_date": "Sat, 15 May 2021 21:29:33 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Windows default locale vs initdb" }, { "msg_contents": "On Sun, May 16, 2021 at 6:29 AM Noah Misch <noah@leadboat.com> wrote:\n\n> On Mon, Apr 19, 2021 at 05:42:51PM +1200, Thomas Munro wrote:\n>\n> > The question we asked ourselves\n> > multiple times in the other thread was how we're supposed to get to\n> > the modern BCP 47 form when creating the template databases. It looks\n> > like one possibility, since Vista, is to call\n> > GetUserDefaultLocaleName()[2]\n>\n> > No patch, but I wondered if any Windows hackers have any feedback on\n> > relative sanity of trying to fix all these problems this way.\n>\n> Sounds reasonable. If PostgreSQL v15 would otherwise run on Windows Server\n> 2003 R2, this is a good time to let that support end.\n>\n> The value returned by GetUserDefaultLocaleName() is a system configured\nparameter, independent of what you set with setlocale(). It might be\nreasonable for initdb but not for a backend in most cases.\n\nYou can get the locale POSIX-ish name using GetLocaleInfoEx(), but this is\nno longer recommended, because using LCIDs is no longer recommended [1].\nAlthough, this would work for legacy locales. 
Please find attached a POC\npatch showing this approach.\n\n[1] https://docs.microsoft.com/en-us/globalization/locale/locale-names\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Wed, 15 Dec 2021 11:32:38 +0100", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Windows default locale vs initdb" }, { "msg_contents": "On Wed, Dec 15, 2021 at 11:32 PM Juan José Santamaría Flecha\n<juanjo.santamaria@gmail.com> wrote:\n> On Sun, May 16, 2021 at 6:29 AM Noah Misch <noah@leadboat.com> wrote:\n>> On Mon, Apr 19, 2021 at 05:42:51PM +1200, Thomas Munro wrote:\n>> > The question we asked ourselves\n>> > multiple times in the other thread was how we're supposed to get to\n>> > the modern BCP 47 form when creating the template databases. It looks\n>> > like one possibility, since Vista, is to call\n>> > GetUserDefaultLocaleName()[2]\n>>\n>> > No patch, but I wondered if any Windows hackers have any feedback on\n>> > relative sanity of trying to fix all these problems this way.\n>>\n>> Sounds reasonable. If PostgreSQL v15 would otherwise run on Windows Server\n>> 2003 R2, this is a good time to let that support end.\n>>\n> The value returned by GetUserDefaultLocaleName() is a system configured parameter, independent of what you set with setlocale(). It might be reasonable for initdb but not for a backend in most cases.\n\nAgreed. Only for initdb, and only if you didn't specify a locale name\non the command line.\n\n> You can get the locale POSIX-ish name using GetLocaleInfoEx(), but this is no longer recommended, because using LCIDs is no longer recommended [1]. Although, this would work for legacy locales. Please find attached a POC patch showing this approach.\n\nNow that museum-grade Windows has been defenestrated, we are free to\ncall GetUserDefaultLocaleName(). 
Here's a patch.\n\nOne thing you did in your patch that I disagree with, I think, was to\nconvert a BCP 47 name to a POSIX name early, that is, s/-/_/. I think\nwe should use the locale name exactly as Windows (really, under the\ncovers, ICU) spells it. There is only one place in the tree today\nthat really wants a POSIX locale name, and that's LC_MESSAGES,\naccessed by GNU gettext, not Windows. We already had code to cope\nwith that.\n\nI think we should also convert to POSIX format when making the\ncollname in your pg_import_system_collations() proposal, so that\nCOLLATE \"en_US\" works (= a SQL identifier), but that's another\nthread[1]. I don't think we should do it in collcollate or\ndatcollate, which is a string for the OS to interpret.\n\nWith my garbage collector hat on, I would like to rip out all of the\nsupport for traditional locale names, eventually. Deleting kludgy\ncode is easy and fun -- 0002 is a first swing at that -- but there\nremains an important unanswered question. How should someone\npg_upgrade a \"English_Canada.1521\" cluster if we now reject that name?\n We'd need to do a conversion to \"en-CA\", or somehow tell the user to.\nHmmmm.\n\n[1] https://www.postgresql.org/message-id/flat/CAC%2BAXB0WFjJGL1n33bRv8wsnV-3PZD0A7kkjJ2KjPH0dOWqQdg%40mail.gmail.com", "msg_date": "Tue, 19 Jul 2022 10:58:41 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Windows default locale vs initdb" }, { "msg_contents": "On Tue, Jul 19, 2022 at 10:58 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Here's a patch.\n\nI added this to the next commitfest, and cfbot promptly told me about\nsome warnings I needed to fix. That'll teach me to post a patch\ntested with \"ci-os-only: windows\". 
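[For reference, the s/-/_/ conversion discussed in the previous message amounts to no more than the following. This is a hypothetical helper for illustration only, not code taken from any of the posted patches:]

```python
def bcp47_to_posix(name: str) -> str:
    # "en-NZ" -> "en_NZ": GNU gettext expects POSIX-style underscores
    # in LC_MESSAGES locale names, while Windows reports BCP 47 hyphens.
    return name.replace("-", "_")

print(bcp47_to_posix("en-NZ"))  # en_NZ
```

[A real implementation inside initdb or the backend would work on C strings, but the transformation itself is only the hyphen-to-underscore swap.]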
Looking more closely at some error\nmessages that report GetLastError() where I'd mixed up %d and %lu, I\nsee also that I didn't quite follow existing conventions for wording\nwhen reporting Windows error numbers, so I fixed that too.\n\nIn the \"startcreate\" step on CI you can see that it says:\n\nThe database cluster will be initialized with locale \"en-US\".\nThe default database encoding has accordingly been set to \"WIN1252\".\nThe default text search configuration will be set to \"english\".\n\nAs for whether \"accordingly\" still applies, by the logic of\nwin32_langinfo()... Windows still considers WIN1252 to be the default\nANSI code page for \"en-US\", though it'd work with UTF-8 too. I'm not\nsure what to make of that. The goal here was to give Windows users\ngood defaults, but WIN1252 is probably not what most people actually\nwant. Hmph.", "msg_date": "Tue, 19 Jul 2022 14:46:48 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Windows default locale vs initdb" }, { "msg_contents": "On Tue, Jul 19, 2022 at 12:59 AM Thomas Munro <thomas.munro@gmail.com>\nwrote:\n\n> Now that museum-grade Windows has been defenestrated, we are free to\n> call GetUserDefaultLocaleName(). Here's a patch.\n>\n\nThis LGTM.\n\n>\n> I think we should also convert to POSIX format when making the\n> collname in your pg_import_system_collations() proposal, so that\n> COLLATE \"en_US\" works (= a SQL identifier), but that's another\n> thread[1]. I don't think we should do it in collcollate or\n> datcollate, which is a string for the OS to interpret.\n>\n\nThat thread has been split [1], but that is how the current version behaves.\n\n>\n> With my garbage collector hat on, I would like to rip out all of the\n> support for traditional locale names, eventually. Deleting kludgy\n> code is easy and fun -- 0002 is a first swing at that -- but there\n> remains an important unanswered question. 
How should someone\n> pg_upgrade a \"English_Canada.1521\" cluster if we now reject that name?\n> We'd need to do a conversion to \"en-CA\", or somehow tell the user to.\n> Hmmmm.\n>\n\nIs there a safe way to do that in pg_upgrade or would we be forcing users\nto pg_dump into the new cluster?\n\n[1]\nhttps://www.postgresql.org/message-id/flat/0050ec23-34d9-2765-9015-98c04f0e18ac%40postgrespro.ru\n\nRegards,\n\nJuan José Santamaría Flecha\n\n
", "msg_date": "Wed, 20 Jul 2022 10:34:38 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Windows default locale vs initdb" }, { "msg_contents": "On Tue, Jul 19, 2022 at 4:47 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> As for whether \"accordingly\" still applies, by the logic of of\n> win32_langinfo()... Windows still considers WIN1252 to be the default\n> ANSI code page for \"en-US\", though it'd work with UTF-8 too. I'm not\n> sure what to make of that. The goal here was to give Windows users\n> good defaults, but WIN1252 is probably not what most people actually\n> want. Hmph.\n>\n\nStill, WIN1252 is not the wrong answer for what we are asking. Even if you\nenable UTF-8 support [1], the system will use the current default Windows\nANSI code page (ACP) for the locale and UTF-8 for the code page.\n\n[1]\nhttps://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/setlocale-wsetlocale?view=msvc-170\n\nRegards,\n\nJuan José Santamaría Flecha\n\n
", "msg_date": "Wed, 20 Jul 2022 12:26:50 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Windows default locale vs initdb" }, { "msg_contents": "On Wed, Jul 20, 2022 at 10:27 PM Juan José Santamaría Flecha\n<juanjo.santamaria@gmail.com> wrote:\n> Still, WIN1252 is not the wrong answer for what we are asking. Even if you enable UTF-8 support [1], the system will use the current default Windows ANSI code page (ACP) for the locale and UTF-8 for the code page.\n\nI'm still confused about what that means. Suppose we decided to\ninsist by adding a \".UTF-8\" suffix to the name, as that page says we\ncan now that we're on Windows 10+, when building the default locale\nname (see experimental 0002 patch, attached). 
It initially seemed to\nhave the right effect:\n\nThe database cluster will be initialized with locale \"en-US.UTF-8\".\nThe default database encoding has accordingly been set to \"UTF8\".\nThe default text search configuration will be set to \"english\".\n\nBut then the Turkish i test in contrib/citext/sql/citext_utf8.sql failed[1]:\n\nSELECT 'i'::citext = 'İ'::citext AS t;\n t\n ---\n- t\n+ f\n (1 row)\n\nAbout the pg_upgrade problem, maybe it's OK ... existing old format\nnames should continue to work, but we can still remove the weird code\nthat does locale name tweaking, right? pg_upgraded databases should\ncontain fixed names (ie that were fixed by old initdb so should\ncontinue to work), and new clusters will get BCP 47 names.\n\nI don't really know, I was just playing with rough ideas by sending\npatches to CI here...\n\n[1] https://cirrus-ci.com/task/6423238052937728", "msg_date": "Wed, 20 Jul 2022 23:44:04 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Windows default locale vs initdb" }, { "msg_contents": "On Wed, Jul 20, 2022 at 1:44 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Wed, Jul 20, 2022 at 10:27 PM Juan José Santamaría Flecha\n> <juanjo.santamaria@gmail.com> wrote:\n> > Still, WIN1252 is not the wrong answer for what we are asking. Even if\n> you enable UTF-8 support [1], the system will use the current default\n> Windows ANSI code page (ACP) for the locale and UTF-8 for the code page.\n>\n> I'm still confused about what that means. Suppose we decided to\n> insist by adding a \".UTF-8\" suffix to the name, as that page says we\n> can now that we're on Windows 10+, when building the default locale\n> name (see experimental 0002 patch, attached). 
It initially seemed to\n> have the right effect:\n>\n> The database cluster will be initialized with locale \"en-US.UTF-8\".\n> The default database encoding has accordingly been set to \"UTF8\".\n> The default text search configuration will be set to \"english\".\n>\n> Let me try to explain this using the \"Beta: Use Unicode UTF-8 for\nworldwide language support\" option [1].\n\n- Currently in a system with the language settings of \"English_United\nStates\" and that option disabled, when executing initdb you get:\n\nThe database cluster will be initialized with locale \"English_United\nStates.1252\".\nThe default database encoding has accordingly been set to \"WIN1252\".\nThe default text search configuration will be set to \"english\".\n\nAnd as a test for psql:\n\nSET lc_time='tr_tr.utf8';\nSET\nSELECT to_char('2000-2-01'::date, 'tmmonth');\nERROR: character with byte sequence 0xc5 0x9f in encoding \"UTF8\" has no\nequivalent in encoding \"WIN1252\"\n\nWe get this error even if the database encoding is UTF8, and is caused by\nthe tr_tr locales being encoded in WIN1254. 
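[The reported conversion failure can be reproduced outside the server. Here is a sketch in Python, with cp1252 and cp1254 standing in for PostgreSQL's WIN1252 and WIN1254 encodings:]

```python
# U+015F LATIN SMALL LETTER S WITH CEDILLA, the first letter of "şubat".
s = "\u015f"

# In UTF-8 it is exactly the byte sequence 0xc5 0x9f from the error message.
assert s.encode("utf-8") == b"\xc5\x9f"

# The Turkish ANSI code page (cp1254, i.e. WIN1254) can represent it...
assert s.encode("cp1254") == b"\xfe"

# ...but the Western European one (cp1252, i.e. WIN1252) cannot, which is
# why the month name fails to convert when the target encoding is WIN1252.
try:
    s.encode("cp1252")
    raise AssertionError("unexpectedly encodable")
except UnicodeEncodeError:
    print("no equivalent in cp1252")
```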
We can discuss this in another\nthread, and I can propose a patch.\n\n- If we enable the UTF-8 support option, then the same test goes as:\n\nThe database cluster will be initialized with locale \"English_United\nStates.utf8\".\nThe default database encoding has accordingly been set to \"UTF8\".\nThe default text search configuration will be set to \"english\".\n\nAnd for psql:\n\nSET lc_time='tr_tr.utf8';\nSET\nSELECT to_char('2000-2-01'::date, 'tmmonth');\n to_char\n---------\n şubat\n(1 row)\n\nIn this case the Windows locales are actually UTF8 encoded.\n\nTL;DR; What I want to show through this example is that Windows ACP is not\nmodified by setlocale(), it can only be done through the Windows registry\nand only in recent releases.\n\n\n> But then the Turkish i test in contrib/citext/sql/citext_utf8.sql\n> failed[1]:\n>\n> SELECT 'i'::citext = 'İ'::citext AS t;\n> t\n> ---\n> - t\n> + f\n> (1 row)\n>\n> This is current state of affairs:\n\n- Windows:\n\nSELECT U&'\\0131' latin_small_dotless,U&'\\0069' latin_small\n,U&'\\0049' latin_capital, lower(U&'\\0049')\n,U&'\\0130' latin_capital_dotted, lower(U&'\\0130');\n latin_small_dotless | latin_small | latin_capital | lower |\nlatin_capital_dotted | lower\n---------------------+-------------+---------------+-------+----------------------+-------\n ı | i | I | i | İ\n | İ\n\n- Linux:\n\nSELECT U&'\\0131' latin_small_dotless,U&'\\0069' latin_small\n,U&'\\0049' latin_capital, lower(U&'\\0049')\n,U&'\\0130' latin_capital_dotted, lower(U&'\\0130');\n latin_small_dotless | latin_small | latin_capital | lower |\nlatin_capital_dotted | lower\n---------------------+-------------+---------------+-------+----------------------+-------\n ı | i | I | i | İ\n | i\n\nLatin_capital_dotted doesn't have the same lower value.\n\n[1]\nhttps://stackoverflow.com/questions/56419639/what-does-beta-use-unicode-utf-8-for-worldwide-language-support-actually-do\n\nRegards,\n\nJuan José Santamaría Flecha\n\n
", "msg_date": "Fri, 22 Jul 2022 13:58:54 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Windows default locale vs initdb" }, { "msg_contents": "On Fri, Jul 22, 2022 at 11:59 PM Juan José Santamaría Flecha\n<juanjo.santamaria@gmail.com> wrote:\n> TL;DR; What I want to show through this example is that Windows ACP is not modified by setlocale(), it can only be done through the Windows registry and only in recent releases.\n\nThanks, that was helpful, and so was that SO link.\n\nSo it sounds like I should forget about the v3-0002 patch, but the\nv3-0001 and v3-0003 patches might have a future.  And it sounds like\nwe might need to investigate maybe defending ourselves against the ACP\nbeing different than what we expect (ie not matching the database\nencoding)?  Did I understand correctly that you're looking into that?\n\n\n", "msg_date": "Fri, 29 Jul 2022 15:33:50 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Windows default locale vs initdb" }, { "msg_contents": "On Fri, Jul 29, 2022 at 3:33 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Fri, Jul 22, 2022 at 11:59 PM Juan José Santamaría Flecha\n> <juanjo.santamaria@gmail.com> wrote:\n> > TL;DR; What I want to show through this example is that Windows ACP is not modified by setlocale(), it can only be done through the Windows registry and only in recent releases.\n>\n> Thanks, that was helpful, and so was that SO link.\n>\n> So it sounds like I should forget about the v3-0002 patch, but the\n> v3-0001 and v3-0003 patches might have a future. 
And it sounds like\n> we might need to investigate maybe defending ourselves against the ACP\n> being different than what we expect (ie not matching the database\n> encoding)? Did I understand correctly that you're looking into that?\n\nI'm going to withdraw this entry. The sooner we get something like\n0001 into a release, the sooner the world will be rid of PostgreSQL\nclusters initialised with the bad old locale names that the manual\nvery clearly tells you not to use for databases.... but I don't\nunderstand this ACP/registry vs database encoding stuff and how it\nrelates to the use of BCP47 locale names, which puts me off changing\nanything until we do.\n\n\n", "msg_date": "Fri, 23 Dec 2022 17:36:15 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Windows default locale vs initdb" }, { "msg_contents": "Another country has changed its name, and a Windows OS update has\nagain broken every PostgreSQL cluster in that whole country[1] (or at\nleast those that had accepted initdb's default choice of locale,\nprobably most). Let's get to the bottom of this, because otherwise it\nis simply going to keep happening, causing administrative pain for a\nlot of people.\n\nHere is a rebase of the basic patch I proposed last time, and a\nre-statement of what we know:\n\n1. initdb chooses a default locale using a technique that gives you\nan unstable (\"Czech Republic\"->\"Czechia\", \"Turkey\"->\"Türkiye\"),\nnon-ASCII (\"Norwegian (Bokmål)\") string that we are warned we should\nnot store anywhere. We store it, and then later it is not recognised.\nInstead we should select an IETF BCP 47 locale name, based on stable\nISO country and language codes, like \"en-US\", \"tr-TR\" etc. Here is\nthe patch to teach initdb to use that, unchanged from v3 except that I\ntweaked the docs a bit.\n\n2. In Windows 10+ it is now also possible to put \".UTF-8\" on the end\nof locale names. 
I couldn't figure out whether we should do that, and\nwhat effect it has on ctypes -- apparently not the effect I expected\n(see upthread). Was our UTF-8 support on Windows already broken, and\nthis new \".UTF-8\" thing is just a new way to reach that brokenness?\nIs it OK to continue to choose the \"legacy\" single byte encodings by\ndefault on that OS, and consider that a separate topic for separate\nresearch?\n\n3. It is not clear to me how we should deal with pg_upgrade.\nEventually we want all of the old-school names to fade away, and\npg_upgrade would need to be part of that. Perhaps there is some API\nthat can be used to translate to the new canonical forms without us\nhaving to maintain translation tables and other messiness in our tree.\n\n4. Eventually we should probably ban non-ASCII characters from\nentering the relevant catalogues (they are shared, so their encoding\nis undefined except that they must be a superset of ASCII), and delete\nall the old win32setlocale.c kludges, after we reach a point where\neveryone should be using exclusively BCP 47.\n\n[1] https://www.postgresql.org/message-id/flat/18196-b10f93dfbde3d7db%40postgresql.org", "msg_date": "Mon, 20 Nov 2023 12:33:07 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Windows default locale vs initdb" }, { "msg_contents": "I clicked \"Trigger\" to get a Mingw test run of this, and it failed[1].\nI see why: our function win32_langinfo() believes that it shouldn't\ncall GetLocaleInfoEx() on non-MSVC compilers, so we see 'initdb:\nerror: could not find suitable encoding for locale \"en-US\"'. I think\nit has fallback code that parses the \".1252\" or whatever on the end of\nthe name, but \"en-US\" hasn't got one. I don't know the first thing\nabout Mingw but it looks like a declaration for that function arrived\n6 years ago[2], and deleting the \"#if defined(_MSC_VER)\" fixes the\nproblem and the tests pass[3]. 
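To make the failure mode concrete, here is a rough sketch of that suffix-parsing fallback -- in Python for brevity, since the real code is C in pg_locale.c, and the helper name here is invented:

```python
def encoding_from_locale_name(name):
    # Mimic the fallback used when we can't ask the OS via
    # GetLocaleInfoEx(): parse a trailing ".NNNN" code page from an
    # old-style name like "English_United States.1252".
    head, sep, suffix = name.rpartition(".")
    if sep and suffix.isdigit():
        return "WIN" + suffix
    # A BCP 47 tag like "en-US" carries no code page suffix, so the
    # fallback has nothing to parse and must give up.
    return None

print(encoding_from_locale_name("English_United States.1252"))  # WIN1252
print(encoding_from_locale_name("en-US"))                       # None
```

Without GetLocaleInfoEx() to ask the OS, a suffix-less name like "en-US" leaves the fallback empty-handed, hence the "could not find suitable encoding" error.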
As far as I know, we don't support any\nMingw but the very latest: it's not a target with real users who have\nversion requirements, it's just a developer [in]convenience, so if it\npasses on CI and whatever MSYS version \"fairywren\" runs in the build\nfarm right now, that should be enough.\n\nI could just do that in this patch, but I suppose that also means that\nsomeone needs to go through pg_locale.c and other places that test\n_MSC_VER not because they actually care about the compiler but because\nthey want to detect some crusty old Mingw version, and see what else\ncan be deleted as a result, possibly including a lot of fallback code.\nIt feels like a separate cleanup for a separate patch.\n\n[1] https://cirrus-ci.com/task/5301814774464512\n[2] https://github.com/mirror/mingw-w64/blame/eff726c461e09f35eeaed125a3570fa5f807f02b/mingw-w64-tools/widl/include/winnls.h#L931\n[3] https://cirrus-ci.com/task/6558569718349824\n\n\n", "msg_date": "Mon, 20 Nov 2023 14:56:45 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Windows default locale vs initdb" }, { "msg_contents": "Here is a thought that occurs to me, as I follow along with Jeff\nDavis's evolving proposals for built-in collations and ctypes: What\nwould stop us from dropping support for the libc (sic) provider on\nWindows? That may sound radical and likely to cause extra work for\npeople on upgrade, but how does that compare to the pain of keeping\nthis barely maintained code in the tree? Suppose the idea in this\nthread goes ahead and we get people to transition to the modern locale\nnames: there is non-zero transitional/upgrade pain there too. 
How\ndelicious it would be to just nuke the whole thing from orbit, and\nkeep only cross-platform code that is maintained with enthusiasm by\nactive hackers.\n\nThat's probably a little extreme, but it's the direction my thoughts\nstart to go in when confronting the realisation that it's up to us\n[Unix hackers making drive-by changes], no one is coming to help us\n[from the Windows user community].\n\nI've even heard others talk about dropping Windows completely, due to\nthe maintenance imbalance. This would be somewhat more fine grained.\n(One could use a similar argument to drop non-NTFS filesystems and\nturn on POSIX-mode file links, to end that other locus of struggle.)\n\n\n", "msg_date": "Thu, 14 Dec 2023 08:58:51 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Windows default locale vs initdb" }, { "msg_contents": "Ertan Küçükoglu offered to try to review and test this, so here's a rebase.\n\nSome notes:\n\n* it turned out that the Turkish i/I test problem I mentioned earlier\nin this thread[1] was just always broken on Windows, we just didn't\never test with UTF-8 before Meson took over; it's skipped now, see\ncommit cff4e5a3[2]\n\n* it seems that you can't actually put encodings like .1252 on the end\n(.UTF-8 must be a special case); I don't know if we should look into a\nbetter UTF-8 mode for modern Windows, but that'd be a separate project\n\n* this patch only benefits people who run initdb.exe without\nexplicitly specifying a locale; probably a good number of real systems\nin the wild actually use EDB's graphical installer which initialises a\ncluster and has its own way of choosing the locale, as discussed in\nErtan's thread[3]\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGJZskvCh%3DQm75UkHrY6c1QZUuC92Po9rponj1BbLmcMEA%40mail.gmail.com#3a00c08214a4285d2f3c4297b0ac2be2\n[2] https://github.com/postgres/postgres/commit/cff4e5a3\n[3] 
https://www.postgresql.org/message-id/flat/CAH2i4ydECHZPxEBB7gtRG3vROv7a0d3tqAFXzcJWQ9hRsc1znQ%40mail.gmail.com", "msg_date": "Mon, 22 Jul 2024 14:51:49 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Windows default locale vs initdb" }, { "msg_contents": "Hi,\n\nI am a complete noob about PostgreSQL development.\nI don't know about the PostgreSQL CI system.\nI will be needing some help as to how to do the tests.\nI have access to different Windows OSes (v10, Server 2022 mainly).\nThese systems can be set to English or Turkish locales if needed.\nI can also add new Windows versions if needed.\nI do not know how to use patch files. I am also not sure what tests I\nshould do.\nDo I need to set up a Windows build system for PostgreSQL CI?\nWill I download some files (EXE, etc) ready for testing? Copy them over an\nexisting installation for testing?\n\nThanks for your help.\n\nRegards,\nErtan\n\nThomas Munro <thomas.munro@gmail.com>, 22 Tem 2024 Pzt, 05:52 tarihinde\nşunu yazdı:\n\n> Ertan Küçükoglu offered to try to review and test this, so here's a rebase.\n>\n> Some notes:\n>\n> * it turned out that the Turkish i/I test problem I mentioned earlier\n> in this thread[1] was just always broken on Windows, we just didn't\n> ever test with UTF-8 before Meson took over; it's skipped now, see\n> commit cff4e5a3[2]\n>\n> * it seems that you can't actually put encodings like .1252 on the end\n> (.UTF-8 must be a special case); I don't know if we should look into a\n> better UTF-8 mode for modern Windows, but that'd be a separate project\n>\n> * this patch only benefits people who run initdb.exe without\n> explicitly specifying a locale; probably a good number of real systems\n> in the wild actually use EDB's graphical installer which initialises a\n> cluster and has its own way of choosing the locale, as discussed in\n> Ertan's thread[3]\n>\n> [1]\n> 
https://www.postgresql.org/message-id/flat/CA%2BhUKGJZskvCh%3DQm75UkHrY6c1QZUuC92Po9rponj1BbLmcMEA%40mail.gmail.com#3a00c08214a4285d2f3c4297b0ac2be2\n> [2] https://github.com/postgres/postgres/commit/cff4e5a3\n> [3] https://www.postgresql.org/message-id/flat/CAH2i4ydECHZPxEBB7gtRG3vROv7a0d3tqAFXzcJWQ9hRsc1znQ%40mail.gmail.com\n>", "msg_date": "Mon, 22 Jul 2024 11:04:00 +0300", "msg_from": "=?UTF-8?B?RXJ0YW4gS8O8w6fDvGtvZ2x1?= <ertan.kucukoglu@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Windows default locale vs initdb" }, { "msg_contents": "Hello Thomas,\n\nCan you please list down some of the use cases for the patch ? Other than\nTurkish, does this patch have an impact on other locales too ?\n\n\nRegards,\nZaid\n\n\nOn Mon, Jul 22, 2024 at 7:52 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> Ertan Küçükoglu offered to try to review and test this, so here's a rebase.\n>\n> Some notes:\n>\n> * it turned out that the Turkish i/I test problem I mentioned earlier\n> in this thread[1] was just always broken on Windows, we just didn't\n> ever test with UTF-8 before Meson took over; it's skipped now, see\n> commit cff4e5a3[2]\n>\n> * it seems that you can't actually put encodings like .1252 on the end\n> (.UTF-8 must be a special case); I don't know if we should look into a\n> better UTF-8 mode for modern Windows, but that'd be a separate project\n>\n> * this patch only benefits people who run initdb.exe without\n> explicitly specifying a locale; probably a good number of real systems\n> in the wild actually use EDB's graphical installer which initialises a\n> cluster and has its own way of choosing the locale, as discussed in\n> Ertan's thread[3]\n>\n> [1]\n> https://www.postgresql.org/message-id/flat/CA%2BhUKGJZskvCh%3DQm75UkHrY6c1QZUuC92Po9rponj1BbLmcMEA%40mail.gmail.com#3a00c08214a4285d2f3c4297b0ac2be2\n> [2] https://github.com/postgres/postgres/commit/cff4e5a3\n> [3]\n> 
https://www.postgresql.org/message-id/flat/CAH2i4ydECHZPxEBB7gtRG3vROv7a0d3tqAFXzcJWQ9hRsc1znQ%40mail.gmail.com\n>", "msg_date": "Mon, 22 Jul 2024 13:37:53 +0500", "msg_from": "Zaid Shabbir <zaidshabbir@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Windows default locale vs initdb" }, { "msg_contents": "On Mon, Jul 22, 2024 at 8:38 PM Zaid Shabbir <zaidshabbir@gmail.com> wrote:\n> Can you please list down some of the use cases for the patch ? 
Other than Turkish, does this patch have an impact on other locales too ?\n\nHi Zaid,\n\nYes, initdb.exe would use BCP47 codes by default for all languages.\nWho knows which country will change its name next?\n\nFrom a quick search of other recent cases: Czech Republic -> Czechia,\nSwaziland -> Eswatini, Cape Verde -> Cabo Verde, and more, plus others\nthat we have older records of in the mailing list that seemed to\nchange in some minor technical way: Macau, Hong Kong, Norwegian etc.\nThe Windows manual says:\n\n\"We do not recommend this form for locale strings embedded in\ncode or serialized to storage, because these strings are more likely\nto be changed by an operating system update than the locale name\nform.\"\n\nIt's pretty bad for our users when it happens and the Windows locale\nname changes: a database cluster that suddenly can't start, and even\nafter you've figured out why and adjusted the references in\npostgresql.conf, you still can't connect. There is also the problem\nthat some of the old full names have non-ASCII characters (Türkiye,\nSão Tomé and Príncipe, Curaçao, Côte d'Ivoire, Åland) which is bad at\nleast in theory because we use the string in times and places when it\nis not clear what encoding the name itself has.\n\nI don't use Windows myself, I've just been watching this train wreck\nreplaying in a loop for long enough. 
Clearly it's going to take some\ntime to wean the user community off the unstable names, and it struck\nme that the default is probably the main source of them in new\nclusters, hence this patch.\n\n\n", "msg_date": "Mon, 22 Jul 2024 22:12:06 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Windows default locale vs initdb" }, { "msg_contents": "On Mon, Jul 22, 2024 at 8:04 PM Ertan Küçükoglu\n<ertan.kucukoglu@gmail.com> wrote:\n> I am a complete noob about PostgreSQL development.\n> I don't know about the PostgreSQL CI system.\n> I will be needing some help as to how to do the tests.\n> I have access to different Windows OSes (v10, Server 2022 mainly).\n> These systems can be set to English or Turkish locales if needed.\n> I can also add new Windows versions if needed.\n> I do not know how to use patch files. I am also not sure what tests I should do.\n> Do I need to set up a Windows build system for PostgreSQL CI?\n> Will I download some files (EXE, etc) ready for testing? Copy them over an existing installation for testing?\n\nSorry, I didn't mean to put you on the spot :-) Yeah you'd need to\ninstall a compiler, various libraries and tools to be able to build\nfrom source with a patch. Unfortunately I'm not the best person to\nexplain how to do that on Windows as I don't use it. Honestly it\nmight be a bit too much new stuff to figure out at once just to test\nthis small patch. 
What I'd be hoping for is confirmation that there\nare no weird unintended consequences or problems I'm not seeing since\nI'm writing blind patches based on documentation only, but it's\nprobably too much to ask to figure out the whole development\nenvironment and then go on an open ended expedition looking for\nunknown problems.\n\n\n", "msg_date": "Mon, 22 Jul 2024 23:01:11 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Windows default locale vs initdb" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com>, 22 Tem 2024 Pzt, 14:00 tarihinde\nşunu yazdı:\n\n> Sorry, I didn't mean to put you on the spot :-) Yeah you'd need to\n> install a compiler, various libraries and tools to be able to build\n> form source with a patch. Unfortunately I'm not the best person to\n> explain how to do that on Windows as I don't use it. Honestly it\n> might be a bit too much new stuff to figure out at once just to test\n> this small patch. What I'd be hoping for is confirmation that there\n> are no weird unintended consequences or problems I'm not seeing since\n> I'm writing blind patches based on documentation only, but it's\n> probably too much to ask to figure out the whole development\n> environment and then go on an open ended expedition looking for\n> unknown problems.\n>\n\nI already installed Visual Studio 2022 with C++ support as suggested in\nhttps://www.postgresql.org/docs/current/install-windows-full.html\nI cloned codes in the system.\nBut, I cannot find any \"src/tools/msvc\" directory. 
It is missing.\nDocument states I need everything in there\n\"The tools for building using Visual C++ or Platform SDK are in the\nsrc\\tools\\msvc directory.\"\nIt seems I will need help setting up the build environment.\n", "msg_date": "Mon, 22 Jul 2024 15:52:24 +0300", "msg_from": "=?UTF-8?B?RXJ0YW4gS8O8w6fDvGtvZ2x1?= <ertan.kucukoglu@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Windows default locale vs initdb" }, { "msg_contents": "\nOn 2024-07-21 Su 10:51 PM, Thomas Munro wrote:\n> Ertan Küçükoglu offered to try to review and test this, so here's a rebase.\n>\n> Some notes:\n>\n> * it turned out that the Turkish i/I test problem I mentioned earlier\n> in this thread[1] was just always broken on Windows, we just didn't\n> ever test with UTF-8 before Meson took over; it's skipped now, see\n> commit cff4e5a3[2]\n>\n> * it seems that you can't actually put encodings like .1252 on the end\n> (.UTF-8 must be a special case); I don't know if we should look into a\n> better UTF-8 mode for modern Windows, but that'd be a separate project\n>\n> * this patch only benefits people who run initdb.exe without\n> explicitly specifying a locale; probably a good number of real systems\n> in the wild actually use EDB's graphical installer which initialises a\n> cluster and has its own way of choosing the locale, as discussed in\n> Ertan's thread[3]\n>\n> [1] https://www.postgresql.org/message-id/flat/CA%2BhUKGJZskvCh%3DQm75UkHrY6c1QZUuC92Po9rponj1BbLmcMEA%40mail.gmail.com#3a00c08214a4285d2f3c4297b0ac2be2\n> [2] https://github.com/postgres/postgres/commit/cff4e5a3\n> [3] https://www.postgresql.org/message-id/flat/CAH2i4ydECHZPxEBB7gtRG3vROv7a0d3tqAFXzcJWQ9hRsc1znQ%40mail.gmail.com\n\n\nI have an environment I can use for testing. But what exactly am I \ntesting? 
:-) Install a few \"problem\" language/region settings, switch \nthe system and ensure initdb runs ok?\n\nOther than Turkish, which locales should I install?\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 22 Jul 2024 09:44:44 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Windows default locale vs initdb" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net>, 22 Tem 2024 Pzt, 16:44 tarihinde şunu\nyazdı:\n\n> I have an environment I can use for testing. But what exactly am I\n> testing? :-) Install a few \"problem\" language/region settings, switch\n> the system and ensure initdb runs ok?\n>\n> Other than Turkish, which locales should I install?\n>\n\nThomas earlier listed a few:\n\"From a quick search of other recent cases: Czech Republic -> Czechia,\nSwaziland -> Eswatini, Cape Verde -> Cabo Verde, and more, plus others\nthat we have older records of in the mailing list that seemed to\nchange in some minor technical way: Macau, Hong Hong, Norwegian etc.\"\n\nI am not sure if all needs testing though.\n\nThanks & Regards,\nErtan", "msg_date": "Mon, 22 Jul 2024 16:51:27 +0300", "msg_from": "=?UTF-8?B?RXJ0YW4gS8O8w6fDvGtvZ2x1?= <ertan.kucukoglu@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Windows default locale vs initdb" }, { "msg_contents": "On Tue, Jul 23, 2024 at 1:44 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> I have an environment I can use for testing. But what exactly am I\n> testing? :-) Install a few \"problem\" language/region settings, switch\n> the system and ensure initdb runs ok?\n\nI just want to know about any weird unexpected consequences of using\nBCP47 locale names, before we change the default in v18. The only\nconcrete thing I found so far was that MinGW didn't like it, but I\nprovided a fix for that. It'd still be possible to initialise a new\ncluster with the old style names if you really want to, but you'd have\nto pass it in explicitly; I was wondering if that could be necessary\nin some pg_upgrade scenario but I guess not, it just clobbers\ntemplate0's pg_database row with values from the source database, and\nrecreates everything else so I think it should be fine (?). 
I am a\nlittle uneasy about the new names not having .encoding but there\ndoesn't seem to be an issue with that (such locales exist on Unix\ntoo), and the OS still knows which encoding they use in that case.\n\n\n", "msg_date": "Tue, 23 Jul 2024 11:19:57 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Windows default locale vs initdb" }, { "msg_contents": "On Tue, Jul 23, 2024 at 11:19 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Tue, Jul 23, 2024 at 1:44 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> > I have an environment I can use for testing. But what exactly am I\n> > testing? :-) Install a few \"problem\" language/region settings, switch\n> > the system and ensure initdb runs ok?\n\nI thought a bit more about what to do with the messy .UTF-8 situation\non Windows, and I think I might see a way forward that harmonises the\ncode and behaviour with Unix, and deletes a lot of special case code.\nBut it's only theories + CI so far.\n\n0001, 0002: As before, teach initdb.exe to choose eg \"en-US\" by default.\n\n0003: Force people to choose locales that match the database\nencoding, as we do on Unix. That is, forbid contradictory\ncombinations like --locale=\"English_United States.1252\"\n--encoding=UTF8, which are currently allowed (and the world is full of\nsuch database clusters because that is how the EDB installer GUI makes\nthem). The only allowed combinations for American English should now\nbe: --locale=\"en-US\" --encoding=\"WIN1252\", and --locale=\"en-US.UTF-8\"\n--encoding=\"UTF8\". You can still use the old names if you like, by\nexplicitly writing --locale=\"English_United States.1252\", but the\nencoding then has to be WIN1252. It's crazy to mix them up, let's ban\nthat.\n\nObviously there is a pg_upgrade case to worry about there. 
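As a rough model of the rule 0003 enforces -- Python for illustration only, the real check is C and these helper names are invented -- the encoding implied by the locale name's suffix must agree with the requested database encoding:

```python
def implied_encoding(locale_name):
    # ".UTF-8"/".utf8" pins the locale to UTF8; a numeric ".NNNN"
    # suffix pins it to the matching WIN code page; a bare BCP 47 tag
    # like "en-US" has no suffix, so we'd have to ask the OS instead
    # (modelled as None here).
    head, sep, suffix = locale_name.rpartition(".")
    if sep and suffix.upper().replace("-", "") == "UTF8":
        return "UTF8"
    if sep and suffix.isdigit():
        return "WIN" + suffix
    return None

def combination_allowed(locale_name, encoding):
    implied = implied_encoding(locale_name)
    return implied is None or implied == encoding

print(combination_allowed("en-US.UTF-8", "UTF8"))                 # True
print(combination_allowed("English_United States.1252", "UTF8"))  # False
```

An old-style name would still be accepted under this rule, but only together with its own code page's encoding.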
We'd have\nto \"fix\" the now illegal combinations, and I don't know exactly how\nyet.\n\n0004: Rip out the code that does extra wchar_t conversions for\ncollations. If I've understood correctly, we don't need them: if you\nhave a .UTF-8 locale then your encoding is UTF-8 and should be able to\nuse strcoll_l() directly. Right?\n\n0005: Something similar was being done for strftime(). And we might\nas well use strftime_l() instead while we're here (part of general\nmovement to use _l functions and stop splattering setlocale() all over\nthe place, for the multithreaded future).\n\nThese patches pass on CI. Do they give the expected results when used\non a real Windows system?\n\nThere are a few more places where we do wchar_t conversions that could\nprobably be stripped out too, if my assumptions are correct, and we\ncould dig further if the basic idea can be validated and people think\nthis is going in a good direction.", "msg_date": "Wed, 7 Aug 2024 16:15:30 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Windows default locale vs initdb" }, { "msg_contents": ">\n> I already installed Visual Studio 2022 with C++ support as suggested in\n> https://www.postgresql.org/docs/current/install-windows-full.html\n> I cloned codes in the system.\n> But, I cannot find any \"src/tools/msvc\" directory. 
It is missing.\n> Document states I need everything in there\n> \"The tools for building using Visual C++ or Platform SDK are in the\n> src\\tools\\msvc directory.\"\n> It seems I will need help setting up the build environment.\n>\n\nI am willing to be a tester for Windows given I could get help setting\nup the build environment.\nIt also feels documentation needs some update as I failed to find necessary\nfiles.\n\nThanks & Regards,\nErtan\n", "msg_date": "Thu, 8 Aug 2024 11:08:55 +0300", "msg_from": "=?UTF-8?B?RXJ0YW4gS8O8w6fDvGtvZ2x1?= <ertan.kucukoglu@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Windows default locale vs initdb" }, { "msg_contents": "On 2024-08-08 Th 4:08 AM, Ertan Küçükoglu wrote:\n>\n> I already installed Visual Studio 2022 with C++ support as\n> suggested in\n> https://www.postgresql.org/docs/current/install-windows-full.html\n> I cloned codes in the system.\n> But, I cannot find any \"src/tools/msvc\" directory. 
It is missing.\n> Document states I need everything in there\n> \"The tools for building using Visual C++ or Platform SDK are in\n> the src\\tools\\msvc directory.\"\n> It seems I will need help setting up the build environment.\n>\n>\n> I am willing to be a tester for Windows given I could get help setting \n> up the build environment.\n> It also feels documentation needs some update as I failed to find \n> necessary files.\n\n\nIf you're trying to build the master branch those documents no longer \napply. You will need to build using meson, as documented here: \n<https://www.postgresql.org/docs/17/install-meson.html>\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Thu, 8 Aug 2024 07:39:37 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Windows default locale vs initdb" } ]
[ { "msg_contents": "Hi All,\n\nPlease help me out with my doubt in RANGE partition with TEXT datatype:\n\npostgres=# create table tab1 (col1 text) PARTITION BY RANGE (col1);\nCREATE TABLE\n\npostgres=# create table p1 (col1 text);\nCREATE TABLE\n\n-- Partition with range from '5' to '10' shows error:\npostgres=# alter table tab1 attach partition p1 for values from ('5') to\n('10');\nERROR: empty range bound specified for partition \"p1\"\nLINE 1: ...r table tab1 attach partition p1 for values from ('5') to ('...\n ^\nDETAIL: Specified lower bound ('5') is greater than or equal to upper\nbound ('10').\n\n-- Whereas, partition with range from '5' to '9' is working fine as below:\npostgres=# alter table tab1 attach partition p1 for values from ('5') to\n('9');\nALTER TABLE\n\nIf this behavior is expected, Kindly let me know, how to represent the\nrange from '5' to '10' with text datatype column?\nIs there any specific restriction for RANGE PARTITION table with TEXT\ndatatype column?\n\nSimilar test scenario is working fine with INTEGER datatype.\n\n-- \n\nWith Regards,\nPrabhat Kumar Sahu\nEnterpriseDB: http://www.enterprisedb.com\n", "msg_date": "Mon, 19 Apr 2021 13:43:17 +0530", "msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Doubt with [ RANGE partition with TEXT datatype ]" }, { "msg_contents": "Hi Prabhat,\n\nOn Mon, Apr 19, 2021 at 5:13 PM Prabhat Sahu\n<prabhat.sahu@enterprisedb.com> wrote:\n>\n> Hi All,\n>\n> Please help me out with my doubt in RANGE partition with TEXT datatype:\n>\n> postgres=# create table tab1 (col1 text) PARTITION BY RANGE (col1);\n> CREATE TABLE\n>\n> postgres=# create table p1 (col1 text);\n> CREATE TABLE\n>\n> -- Partition with range from '5' to '10' shows error:\n> postgres=# alter table tab1 attach partition p1 for values from ('5') to ('10');\n> ERROR:  empty range bound specified for partition \"p1\"\n> LINE 1: ...r table tab1 attach partition p1 for values from ('5') to ('...\n>                                                              ^\n> DETAIL:  Specified lower bound ('5') is greater than or equal to upper bound ('10').\n>\n> -- Whereas, partition with range from '5' to '9' is working fine as below:\n> postgres=# alter table tab1 attach partition p1 for values from ('5') to ('9');\n> ALTER TABLE\n\nWell, that is how comparing text values works. 
If you are expecting\nthe comparisons to follow numerical rules, use a numeric data type.\n\n> If this behavior is expected, Kindly let me know, how to represent the range from '5' to '10' with text datatype column?\n\nDon't know why you want to use the text type for the column and these\nparticular values for the partitions bounds, but one workaround would\nbe to use '05' instead of '5'.\n\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 19 Apr 2021 17:46:45 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Doubt with [ RANGE partition with TEXT datatype ]" }, { "msg_contents": "On Mon, Apr 19, 2021 at 2:16 PM Amit Langote <amitlangote09@gmail.com>\nwrote:\n\n> Hi Prabhat,\n>\n> On Mon, Apr 19, 2021 at 5:13 PM Prabhat Sahu\n> <prabhat.sahu@enterprisedb.com> wrote:\n> >\n> > Hi All,\n> >\n> > Please help me out with my doubt in RANGE partition with TEXT datatype:\n> >\n> > postgres=# create table tab1 (col1 text) PARTITION BY RANGE (col1);\n> > CREATE TABLE\n> >\n> > postgres=# create table p1 (col1 text);\n> > CREATE TABLE\n> >\n> > -- Partition with range from '5' to '10' shows error:\n> > postgres=# alter table tab1 attach partition p1 for values from ('5') to\n> ('10');\n> > ERROR: empty range bound specified for partition \"p1\"\n> > LINE 1: ...r table tab1 attach partition p1 for values from ('5') to\n> ('...\n> > ^\n> > DETAIL: Specified lower bound ('5') is greater than or equal to upper\n> bound ('10').\n> >\n> > -- Whereas, partition with range from '5' to '9' is working fine as\n> below:\n> > postgres=# alter table tab1 attach partition p1 for values from ('5') to\n> ('9');\n> > ALTER TABLE\n>\n> Well, that is how comparing text values works. 
If you are expecting\n> the comparisons to follow numerical rules, use a numeric data type.\n>\n> > If this behavior is expected, Kindly let me know, how to represent the\n> range from '5' to '10' with text datatype column?\n>\n> Don't know why you want to use the text type for the column and these\n> particular values for the partitions bounds, but one workaround would\n> be to use '05' instead of '5'.\n>\n\nWhile testing on some PG behavior, I came across such a scenario/doubt.\nThank you Amit for the clarification.\n\n-- \n\nWith Regards,\nPrabhat Kumar Sahu\nEnterpriseDB: http://www.enterprisedb.com\n", "msg_date": "Mon, 19 Apr 2021 14:41:55 +0530", "msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Doubt with [ RANGE partition with TEXT datatype ]" } ]
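The behavior Amit describes in the thread above — text-type range bounds comparing lexicographically rather than numerically — can be sketched outside the server. A minimal illustrative Python model (not PostgreSQL code; real comparisons also depend on the column's collation):

```python
# Text bounds compare character by character, so '10' sorts before '5'
# (because '1' < '5'), making the range FROM ('5') TO ('10') empty.
def text_range_is_empty(lower: str, upper: str) -> bool:
    # Mirrors the check behind "empty range bound specified for partition":
    # the lower bound must sort strictly before the upper bound.
    return lower >= upper

print(text_range_is_empty('5', '10'))   # True: rejected as an empty range
print(text_range_is_empty('5', '9'))    # False: accepted
print(text_range_is_empty('05', '10'))  # False: zero-padding workaround
```

Zero-padding the bound values, as suggested in the thread, makes lexicographic and numeric ordering agree for fixed-width strings.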
[ { "msg_contents": "hi, all\n\n Recently, I found the copyright info for PG11 branch still is \"Portions\nCopyright (c) *1996-2018*, PostgreSQL Global Development Group\". Do we need\nto update it?\n\n regards, ChenBo\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n", "msg_date": "Mon, 19 Apr 2021 05:43:58 -0700 (MST)", "msg_from": "bchen90 <bchen90@163.com>", "msg_from_op": true, "msg_subject": "Do we need to update copyright for PG11 branch" }, { "msg_contents": "bchen90 <bchen90@163.com> writes:\n> Recently, I found the copyright info for PG11 branch still is \"Portions\n> Copyright (c) *1996-2018*, PostgreSQL Global Development Group\". Do we need\n> to update it?\n\nNo, that's not our practice.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 19 Apr 2021 09:20:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Do we need to update copyright for PG11 branch" }, { "msg_contents": "On Mon, Apr 19, 2021 at 09:20:22AM -0400, Tom Lane wrote:\n> bchen90 <bchen90@163.com> writes:\n> > Recently, I found the copyright info for PG11 branch still is \"Portions\n> > Copyright (c) *1996-2018*, PostgreSQL Global Development Group\". Do we need\n> > to update it?\n> \n> No, that's not our practice.\n\nWe technically only update in back branches:\n\n\t./doc/src/sgml/legal.sgml in head and back branches\n\t./COPYRIGHT in back branches\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 19 Apr 2021 20:27:53 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Do we need to update copyright for PG11 branch" } ]
[ { "msg_contents": "Hi.\n\nWith the commit mentioned in the $subject, I am seeing the\nchange in behaviour with the varlena header size. Please\nconsider the below test:\n\npostgres@83795=#CREATE TABLE test_storage_char(d char(20));\nCREATE TABLE\npostgres@83795=#INSERT INTO test_storage_char SELECT REPEAT('e', 20);\nINSERT 0 1\npostgres@83795=#SELECT d, pg_column_size(d) FROM test_storage_char;\n d | pg_column_size\n----------------------+----------------\n eeeeeeeeeeeeeeeeeeee | 21\n(1 row)\n\npostgres@83795=#ALTER TABLE test_storage_char ALTER COLUMN d SET STORAGE\nPLAIN;\nALTER TABLE\npostgres@83795=#SELECT d, pg_column_size(d) FROM test_storage_char;\n d | pg_column_size\n----------------------+----------------\n eeeeeeeeeeeeeeeeeeee | 21\n(1 row)\n\npostgres@83795=#UPDATE test_storage_char SET d='ab' WHERE d LIKE '%e%';\nUPDATE 1\npostgres@83795=#SELECT d, pg_column_size(d) FROM test_storage_char;\n d | pg_column_size\n----------------------+----------------\n ab | 24\n(1 row)\n\nAfter changing the STORAGE for the column and UPDATE, pg_column_size\nnow returns the size as 24.\n\n*BEFORE Commit 86dc90056:*\n\npostgres@129158=#SELECT d, pg_column_size(d) FROM test_storage_char;\n d | pg_column_size\n----------------------+----------------\n ab | 21\n(1 row)\n\nI am not sure whether this change is expected? Or missing something\nin the toasting the attribute?\n\n\nThanks,\nRushabh Lathia\nwww.EnterpriseDB.com\n", "msg_date": "Mon, 19 Apr 2021 18:30:06 +0530", "msg_from": "Rushabh Lathia <rushabh.lathia@gmail.com>", "msg_from_op": true, "msg_subject": "Commit 86dc90056 - Rework planning and execution of UPDATE and DELETE" }, { "msg_contents": "Rushabh Lathia <rushabh.lathia@gmail.com> writes:\n> With the commit mentioned in the $subject, I am seeing the\n> change in behaviour with the varlena header size.\n\nInteresting. AFAICS, the new behavior is correct and the old is wrong.\nSET STORAGE PLAIN is supposed to disable use of TOAST features, including\nshort varlena headers. 
So now that's being honored by the UPDATE, but\nbefore it was not. I have no idea exactly why that changed though ---\nI'd expect that to be implemented in low-level tuple-construction logic\nthat the planner rewrite wouldn't have changed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 19 Apr 2021 10:34:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Commit 86dc90056 - Rework planning and execution of UPDATE and\n DELETE" }, { "msg_contents": "On Mon, Apr 19, 2021 at 10:00 PM Rushabh Lathia\n<rushabh.lathia@gmail.com> wrote:\n>\n> Hi.\n>\n> With the commit mentioned in the $subject, I am seeing the\n> change in behaviour with the varlena header size. Please\n> consider the below test:\n>\n> postgres@83795=#CREATE TABLE test_storage_char(d char(20));\n> CREATE TABLE\n> postgres@83795=#INSERT INTO test_storage_char SELECT REPEAT('e', 20);\n> INSERT 0 1\n> postgres@83795=#SELECT d, pg_column_size(d) FROM test_storage_char;\n> d | pg_column_size\n> ----------------------+----------------\n> eeeeeeeeeeeeeeeeeeee | 21\n> (1 row)\n>\n> postgres@83795=#ALTER TABLE test_storage_char ALTER COLUMN d SET STORAGE PLAIN;\n> ALTER TABLE\n> postgres@83795=#SELECT d, pg_column_size(d) FROM test_storage_char;\n> d | pg_column_size\n> ----------------------+----------------\n> eeeeeeeeeeeeeeeeeeee | 21\n> (1 row)\n>\n> postgres@83795=#UPDATE test_storage_char SET d='ab' WHERE d LIKE '%e%';\n> UPDATE 1\n> postgres@83795=#SELECT d, pg_column_size(d) FROM test_storage_char;\n> d | pg_column_size\n> ----------------------+----------------\n> ab | 24\n> (1 row)\n>\n> After changing the STORAGE for the column and UPDATE, pg_column_size\n> now returns the size as 24.\n>\n> BEFORE Commit 86dc90056:\n>\n> postgres@129158=#SELECT d, pg_column_size(d) FROM test_storage_char;\n> d | pg_column_size\n> ----------------------+----------------\n> ab | 21\n> (1 row)\n>\n> I am not sure whether this change is expected? 
Or missing something\n> in the toasting the attribute?\n\nI haven't studied this closely enough yet to say if the new behavior\nis correct or not, but can say why this has changed.\n\nBefore 86dc90056, the new tuple to pass to ExecUpdate would be\ncomputed with a TupleDesc that uses pg_type.typstorage for the column\ninstead of the column's actual pg_attribute.attstorage. That's\nbecause the new tuple would be computed from the subplan's targetlist\nand the TupleDesc for that targetlist is computed with no regard to\nwhere the computed tuple will go; IOW ignoring the target table's\nactual TupleDesc.\n\nAfter 86dc90056, the new tuple is computed with the target table's\nactual TupleDesc, so the new value respects the column's attstorage,\nwhich makes me think the new behavior is not wrong.\n\nWill look more closely tomorrow.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 19 Apr 2021 23:34:27 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Commit 86dc90056 - Rework planning and execution of UPDATE and\n DELETE" }, { "msg_contents": "I wrote:\n> Rushabh Lathia <rushabh.lathia@gmail.com> writes:\n>> With the commit mentioned in the $subject, I am seeing the\n>> change in behaviour with the varlena header size.\n\n> Interesting. AFAICS, the new behavior is correct and the old is wrong.\n> SET STORAGE PLAIN is supposed to disable use of TOAST features, including\n> short varlena headers. So now that's being honored by the UPDATE, but\n> before it was not. I have no idea exactly why that changed though ---\n> I'd expect that to be implemented in low-level tuple-construction logic\n> that the planner rewrite wouldn't have changed.\n\nOh, after a bit of tracing I see it. 
In v13, the new value gets\nshort-header-ified when a tuple is constructed here:\n\n /*\n * Ensure input tuple is the right format for the target relation.\n */\n if (node->mt_scans[node->mt_whichplan]->tts_ops != planSlot->tts_ops)\n {\n ExecCopySlot(node->mt_scans[node->mt_whichplan], planSlot);\n planSlot = node->mt_scans[node->mt_whichplan];\n }\n\nwhere the target slot has been made like this:\n\n mtstate->mt_scans[i] =\n ExecInitExtraTupleSlot(mtstate->ps.state, ExecGetResultType(mtstate->mt_plans[i]),\n table_slot_callbacks(resultRelInfo->ri_RelationDesc));\n\nSo that's using a tupdesc that's been constructed according to the\ndefault properties of the column datatypes, in particular attstorage\nwill be 'x' for the 'd' column. Later we transpose the data into\na slot that actually has the target table's rowtype, but the damage\nis already done; the value isn't un-short-headerized at that point.\n(I wonder if that should be considered a bug?)\n\n86dc90056 got rid of the intermediate mt_scans slots, so the 'ab'\nvalue only gets put into a slot that has the table's real descriptor,\nand it never loses its original 4-byte header.\n\nI observe that the INSERT code path still does the wrong thing:\n\nregression=# insert into test_storage_char values('foo');\nINSERT 0 1\nregression=# SELECT d, pg_column_size(d) FROM test_storage_char;\n d | pg_column_size \n----------------------+----------------\n ab | 24\n foo | 21\n(2 rows)\n\nMaybe we oughta try to fix that sometime. 
It doesn't seem terribly\nhigh-priority though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 19 Apr 2021 11:07:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Commit 86dc90056 - Rework planning and execution of UPDATE and\n DELETE" }, { "msg_contents": "On Mon, Apr 19, 2021 at 10:34 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> After 86dc90056, the new tuple is computed with the target table's\n> actual TupleDesc, so the new value respects the column's attstorage,\n> which makes me think the new behavior is not wrong.\n\nI would not have expected SET STORAGE PLAIN to disable the use of\nshort varlena headers. *Maybe* at some point in time there was enough\ncode that couldn't operate directly on short varlenas to justify a\ntheory that in some circumstances eschewing short headers would save\non CPU cycles. But surely in 2021 this is not true and this behavior\nis not plausibly desired by anyone.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 19 Apr 2021 11:59:00 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Commit 86dc90056 - Rework planning and execution of UPDATE and\n DELETE" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Apr 19, 2021 at 10:34 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>> After 86dc90056, the new tuple is computed with the target table's\n>> actual TupleDesc, so the new value respects the column's attstorage,\n>> which makes me think the new behavior is not wrong.\n\n> I would not have expected SET STORAGE PLAIN to disable the use of\n> short varlena headers.\n\nAu contraire. The reason that mode exists at all (for varlena types)\nis to support data types that haven't been updated for TOAST. Perhaps\nthat's now the empty set, but it's not really our job to take away the\ncapability. 
If you really want MAIN you should say that, not quibble\nthat PLAIN doesn't mean what it's always been understood to mean.\n\nI don't think that this behavior quite breaks such data types, because\nif you actually have a type like that then you've set typstorage = PLAIN\nand we will not allow there to be any tupdescs in the system that differ\nfrom that. The issue is just that if you set a particular column of\nan otherwise-toastable type to be PLAIN then we're not terribly rigorous\nabout enforcing that, because values that have been toasted can get into\nthe column without being fully detoasted. (I've not checked, but I\nsuspect that you could also get a compressed or toasted-out-of-line value\ninto such a column if you tried hard enough.)\n\nRelated to this is that when you update some other column(s) of the table,\nwe don't try to force detoasting of existing values in a column recently\nset to PLAIN. Personally I think that's fine, so it means that the\nlack of rigor is inherent.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 19 Apr 2021 12:13:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Commit 86dc90056 - Rework planning and execution of UPDATE and\n DELETE" }, { "msg_contents": "On Mon, Apr 19, 2021 at 12:13 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Au contraire. The reason that mode exists at all (for varlena types)\n> is to support data types that haven't been updated for TOAST. Perhaps\n> that's now the empty set, but it's not really our job to take away the\n> capability. If you really want MAIN you should say that, not quibble\n> that PLAIN doesn't mean what it's always been understood to mean.\n\nThis kind of begs the question of whether you have the right idea\nabout what PLAIN has always been understood to mean, and whether\neveryone understands it the same way. I formed my understanding of\nwhat PLAIN is understood to mean by reading the ALTER TABLE .. 
SET\nSTORAGE documentation, and there's no real hint in there that this is\nsome kind of backward-compatibility only feature. Rather, I read that\nparagraph to suggest that you can choose between the four options as a\nway of getting best performance. Both external storage and compression\nare trade-offs: they make the tuples smaller, which can be good for\nperformance, but they also make the toasted columns more expensive to\naccess, which can be bad for performance. It seems completely\nreasonable to suppose that some workloads may benefit from a\nnon-default TOAST strategy; e.g. if you often access only a few\ncolumns but scan a lot of rows, toasting wins; if you typically access\nevery column but only a few rows via an index scan, not toasting wins.\nAnd if that is your idea about what the feature does - an idea that\nseems perfectly defensible given what the documentation says about\nthis - then I think it's going to be surprising to find that it also\ninhibits 1-byte headers from being used. But, IDK, maybe nobody will\ncare (or maybe I'm the only one who will be surprised).\n\nPerhaps this whole area needs a broader rethink at some point. I'm\ninclined to think that compatibility with varlena data types that\nhaven't been updated since PostgreSQL 8.3 came out is almost a\nnon-goal and maybe we ought to just kick such data types and the\nassociated code paths to the curb. It's unlikely that they get much\ntesting. On the other hand, perhaps we'd like to have more control\nover the decision to compress or store externally than we have\ncurrently. 
I think if I were designing this from scratch, I'd want one\nswitch for whether it's OK to compress, with values meaning \"yes,\"\n\"no,\" and \"only if stored externally,\" a second switch for the\n*length* at which external storage should be used (so that I can push\nout rarely-used columns at lower size thresholds and commonly-used\nones at higher thresholds), and a third for what should happen if we\ndo the stuff allowed by the first two switches and the tuple still\ndoesn't fit, with value meaning \"fail\" and \"externalize anyway\".\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 19 Apr 2021 12:46:19 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Commit 86dc90056 - Rework planning and execution of UPDATE and\n DELETE" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Apr 19, 2021 at 12:13 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Au contraire. The reason that mode exists at all (for varlena types)\n>> is to support data types that haven't been updated for TOAST.\n\n> This kind of begs the question of whether you have the right idea\n> about what PLAIN has always been understood to mean, and whether\n> everyone understands it the same way. I formed my understanding of\n> what PLAIN is understood to mean by reading the ALTER TABLE .. SET\n> STORAGE documentation, and there's no real hint in there that this is\n> some kind of backward-compatibility only feature.\n\nThat doco is explaining the users-eye view of it. Places addressed\nto datatype developers, such as the CREATE TYPE reference page, see\nit a bit differently. 
CREATE TYPE for instance points out that\n\n All storage values other than plain imply that the functions of the\n data type can handle values that have been toasted, as described in ...\n\n> I think if I were designing this from scratch, I'd want one\n> switch for whether it's OK to compress, with values meaning \"yes,\"\n> \"no,\" and \"only if stored externally,\" a second switch for the\n> *length* at which external storage should be used (so that I can push\n> out rarely-used columns at lower size thresholds and commonly-used\n> ones at higher thresholds), and a third for what should happen if we\n> do the stuff allowed by the first two switches and the tuple still\n> doesn't fit, with value meaning \"fail\" and \"externalize anyway\".\n\nYeah, I don't think the existing options for attstorage have much\nto recommend them except backwards compatibility. But if we do\nredesign them, I'd still say there should be a way for a data\ntype to say that it doesn't support these weird header hacks that\nwe've invented. The notion that short header doesn't cost anything\nseems extremely Intel-centric to me.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 19 Apr 2021 13:03:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Commit 86dc90056 - Rework planning and execution of UPDATE and\n DELETE" }, { "msg_contents": "On Mon, Apr 19, 2021 at 1:03 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> That doco is explaining the users-eye view of it. Places addressed\n> to datatype developers, such as the CREATE TYPE reference page, see\n> it a bit differently. CREATE TYPE for instance points out that\n>\n> All storage values other than plain imply that the functions of the\n> data type can handle values that have been toasted, as described in ...\n\nInteresting. It feels to me like SET STORAGE PLAIN feels like it is\nreally trying to be two different things. 
Either you want to inhibit\ncompression and external storage for performance reasons, or your data\ntype can't support either one. Maybe we should separate those\nconcepts, since there's no mode right now that says \"don't ever\ncompress, and externalize only if there's absolutely no other way,\"\nand there's no way to disable compression and externalization without\nalso killing off short headers. :-(\n\n> The notion that short header doesn't cost anything seems extremely Intel-centric to me.\n\nI don't think so. It's true that Intel is very forgiving about\nunaligned accesses compared to some other architectures, but I think\nif you have a terabyte of data, you want it to fit into as few disk\npages as possible pretty much no matter what architecture you're\nusing. The dominant costs are going to be the I/O costs, not the CPU\ncosts of dealing with unaligned bytes. In fact, even if you have a\ngigabyte of data, I bet it's *still* better to use a more compact\non-disk representation. Now, the dominant cost is going to be pumping\nthe data through the L3 CPU cache, which is still - I think - going to\nbe quite a lot more important than the CPU costs of dealing with\nunaligned bytes. The CPU bus is an I/O bottleneck not unlike the disk\nitself, just at a higher rate of speed which is still way slower than\nthe CPU speed. Now if you have a megabyte of data, or better yet a\nkilobyte of data, then I think optimizing for CPU efficiency may well\nbe the right thing to do. I don't know how much 4-byte varlena headers\nreally save there, but if I were designing a storage representation\nfor very small data sets, I'd definitely be thinking about how I could\nwaste space to shave cycles.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 19 Apr 2021 13:22:20 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Commit 86dc90056 - Rework planning and execution of UPDATE and\n DELETE" } ]
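The 21-byte vs. 24-byte values of pg_column_size() seen throughout this thread come down to varlena header width: the same 20 bytes of char(20) payload carry either a 1-byte short header or the full 4-byte header once short headers are not used. A back-of-envelope Python model of those sizes (an illustrative sketch only, not PostgreSQL's actual varlena macros; the 126-byte cutoff for short headers is an assumption drawn from the on-disk format description):

```python
# Model of pg_column_size() for an in-line, uncompressed varlena value:
# payload length plus either a 1-byte short header or a 4-byte header.
def varlena_size(payload_len: int, short_header: bool) -> int:
    if short_header:
        if payload_len > 126:
            raise ValueError("short headers only cover small values")
        return payload_len + 1
    return payload_len + 4

# char(20) always stores 20 payload bytes ('ab' is blank-padded).
print(varlena_size(20, short_header=True))   # 21, the pre-86dc90056 UPDATE result
print(varlena_size(20, short_header=False))  # 24, the result with a full header
```

This matches the thread's observation: whether the stored value is 'eeee...e' or the padded 'ab', the payload is 20 bytes, and only the header width changes the reported size.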
[ { "msg_contents": "Hello all,\nI would like to know your opinion on the following behaviour I see for \nPostgreSQL setup with synchronous replication.\n\nThis behaviour happens in a special use case. In this use case, there \nare 2 synchronous replicas with the following config (truncated):\n\n- 2 nodes\n- synchronous_standby_names='*'\n- synchronous_commit=remote_apply\n\n\nWith this setup run the following steps (LAN down - LAN between master \nand replica):\n-----------------\npostgres=# truncate table a;\nTRUNCATE TABLE\npostgres=# insert into a values (1); -- LAN up, insert has been applied \nto replica.\nINSERT 0 1\nNow turn off the LAN on the standby server:\npostgres=# insert into a values (2); --LAN down, waiting for a \nconfirmation from sync replica.
In this situation cancel it (press CTRL+C)\n^CCancel request sent\nWARNING:  canceling wait for synchronous replication due to user request\nDETAIL:  The transaction has already committed locally, but might not \nhave been replicated to the standby.\nINSERT 0 1\npostgres=# insert into a values (2); --LAN down, waiting for sync \nreplica, second attempt, cancel it as well (CTRL+C)\n^CCancel request sent\nWARNING:  canceling wait for synchronous replication due to user request\nDETAIL:  The transaction has already committed locally, but might not \nhave been replicated to the standby.\nINSERT 0 1\npostgres=# update a set n=3 where n=2; --LAN down, waiting for sync \nreplica, cancel it (CTRL+C)\n^CCancel request sent\nWARNING:  canceling wait for synchronous replication due to user request\nDETAIL:  The transaction has already committed locally, but might not \nhave been replicated to the standby.\nUPDATE 2\npostgres=# update a set n=3 where n=2; -- run the same update,because \ndata from the previous attempt was commited on master, it is sucessfull, \nbut no changes\nUPDATE 0\npostgres=# select * from a;\n  n\n---\n  1\n  3\n  3\n(3 rows)\npostgres=#\n------------------------\n\nNow, there is only value 1 in the sync replica table (no other values), \ndata is not in sync. This is expected, after the LAN restore, data will \ncome sync again, but if the main/primary node will fail and we failover \nto replica before the LAN is back up or the storage for this node would \nbe destroyed and data would not sync to replica before it, we will lose \ndata even if the client received successful commit (with a warning).\n From the synchronous_commit=remote_write level and \"higher\", I would \nexpect, that when the remote application (doesn't matter if flush, write \nor apply) would not be applied I would not receive a confirmation about \nthe commit (even with a warning). 
Something like, if there is no commit \nfrom sync replica, there is no commit on primary and if someone performs \nthe steps above, the whole transaction will not send a confirmation.\n\nThis can cause issues if the application receives a confirmation about \nthe success and performs some follow-up steps e.g. create a user account \nand sends a request to the mail system to create an account or create a \nVPN account. If the scenario above happens, there can exist a VPN \naccount that does not have any presence in the central database and can \nbe a security issue.\n\nI hope I explained it sufficiently. :-)\n\nDo you think, that would be possible to implement a process that would \nsolve this use case?\n\nThank you\nOndrej", "msg_date": "Mon, 19 Apr 2021 18:19:37 +0100", "msg_from": "=?UTF-8?B?T25kxZllaiDFvWnFvmth?= <ondrej.zizka@stratox.cz>", "msg_from_op": true, "msg_subject": "Synchronous commit behavior during network outage" }, { "msg_contents": "Hi Ondřej,\n\nThanks for the report. It seems to be a clear violation of what is\npromised in the docs. Although it's unlikely that someone implemented\nan application which deals with important data and \"pressed Ctr+C\" as\nit's done in psql. So this might be not such a critical issue after\nall.
BTW what version of PostgreSQL are you using?\n\n\nOn Mon, Apr 19, 2021 at 10:13 PM Ondřej Žižka <ondrej.zizka@stratox.cz> wrote:\n>\n> Hello all,\n> I would like to know your opinion on the following behaviour I see for PostgreSQL setup with synchronous replication.\n>\n> This behaviour happens in a special use case. In this use case, there are 2 synchronous replicas with the following config (truncated):\n>\n> - 2 nodes\n> - synchronous_standby_names='*'\n> - synchronous_commit=remote_apply\n>\n>\n> With this setup run the following steps (LAN down - LAN between master and replica):\n> -----------------\n> postgres=# truncate table a;\n> TRUNCATE TABLE\n> postgres=# insert into a values (1); -- LAN up, insert has been applied to replica.\n> INSERT 0 1\n> Vypnu LAN na serveru se standby:\n> postgres=# insert into a values (2); --LAN down, waiting for a confirmation from sync replica. In this situation cancel it (press CTRL+C)\n> ^CCancel request sent\n> WARNING: canceling wait for synchronous replication due to user request\n> DETAIL: The transaction has already committed locally, but might not have been replicated to the standby.\n> INSERT 0 1\n> There will be warning that commit was performed only locally:\n> 2021-04-12 19:55:53.063 CEST [26104] WARNING: canceling wait for synchronous replication due to user request\n> 2021-04-12 19:55:53.063 CEST [26104] DETAIL: The transaction has already committed locally, but might not have been replicated to the standby.\n>\n> postgres=# insert into a values (2); --LAN down, waiting for a confirmation from sync replica. 
In this situation cancel it (press CTRL+C)\n> ^CCancel request sent\n> WARNING: canceling wait for synchronous replication due to user request\n> DETAIL: The transaction has already committed locally, but might not have been replicated to the standby.\n> INSERT 0 1\n> postgres=# insert into a values (2); --LAN down, waiting for sync replica, second attempt, cancel it as well (CTRL+C)\n> ^CCancel request sent\n> WARNING: canceling wait for synchronous replication due to user request\n> DETAIL: The transaction has already committed locally, but might not have been replicated to the standby.\n> INSERT 0 1\n> postgres=# update a set n=3 where n=2; --LAN down, waiting for sync replica, cancel it (CTRL+C)\n> ^CCancel request sent\n> WARNING: canceling wait for synchronous replication due to user request\n> DETAIL: The transaction has already committed locally, but might not have been replicated to the standby.\n> UPDATE 2\n> postgres=# update a set n=3 where n=2; -- run the same update,because data from the previous attempt was commited on master, it is sucessfull, but no changes\n> UPDATE 0\n> postgres=# select * from a;\n> n\n> ---\n> 1\n> 3\n> 3\n> (3 rows)\n> postgres=#\n> ------------------------\n>\n> Now, there is only value 1 in the sync replica table (no other values), data is not in sync. This is expected, after the LAN restore, data will come sync again, but if the main/primary node will fail and we failover to replica before the LAN is back up or the storage for this node would be destroyed and data would not sync to replica before it, we will lose data even if the client received successful commit (with a warning).\n> From the synchronous_commit=remote_write level and \"higher\", I would expect, that when the remote application (doesn't matter if flush, write or apply) would not be applied I would not receive a confirmation about the commit (even with a warning). 
Something like, if there is no commit from sync replica, there is no commit on primary and if someone performs the steps above, the whole transaction will not send a confirmation.\n>\n> This can cause issues if the application receives a confirmation about the success and performs some follow-up steps e.g. create a user account and sends a request to the mail system to create an account or create a VPN account. If the scenario above happens, there can exist a VPN account that does not have any presence in the central database and can be a security issue.\n>\n> I hope I explained it sufficiently. :-)\n>\n> Do you think, that would be possible to implement a process that would solve this use case?\n>\n> Thank you\n> Ondrej\n\n\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 20 Apr 2021 19:23:42 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Synchronous commit behavior during network outage" }, { "msg_contents": "\n\nOn 4/20/21 6:23 PM, Aleksander Alekseev wrote:\n> Hi Ondřej,\n> \n> Thanks for the report. It seems to be a clear violation of what is\n> promised in the docs. Although it's unlikely that someone implemented\n> an application which deals with important data and \"pressed Ctr+C\" as\n> it's done in psql. So this might be not such a critical issue after\n> all. BTW what version of PostgreSQL are you using?\n> \n\nWhich part of the docs does this contradict?\n\nWith Ctrl+C the application *did not* receive confirmation - the commit\nwas interrupted before fully completing. In a way, it's about the same\nsituation as if a regular commit was interrupted randomly. It might have\nhappened before the commit log got updated, or maybe right after it,\nwhich determines the outcome.\n\nWhat I find a bit strange is that this inserts 1, 2, 2, 2 locally, and\nyet we end up with just two rows with 2 (before the update). 
I don't see\nwhy a network outage should have such consequence.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 20 Apr 2021 18:38:01 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Synchronous commit behavior during network outage" }, { "msg_contents": "Hello Aleksander,\n\nThank you for the reaction. This was tested on version 13.2.\n\nThere are also other possible situations with the same setup and similar \nissue:\n\n-----------------\nWhen the background process on server fails....\n\nOn postgresql1:\ntecmint=# select * from a; --> LAN on sync replica is OK\n  id\n----\n   1\n(1 row)\n\ntecmint=# insert into a values (2); ---> LAN on sync replica is DOWN and \ninsert is waiting. During this time kill the background process on the \nPostgreSQL server for this session\nWARNING:  canceling the wait for synchronous replication and terminating \nconnection due to administrator command\nDETAIL:  The transaction has already committed locally, but might not \nhave been replicated to the standby.\nserver closed the connection unexpectedly\n     This probably means the server terminated abnormally\n     before or while processing the request.\nThe connection to the server was lost. Attempting reset: Succeeded.\ntecmint=# select * from a;\n  id\n----\n   1\n   2\n(2 rows)\n\ntecmint=# ---> LAN on sync replica is still DOWN\n\nThe potgres session will restore after the background process failed. \nWhen you run select on master, it still looks OK. But data is still not \nreplicated on the sync replica. If we lost the master now, we would lost \nthis data as well.\n\n**************\nAnother case\n**************\n\nKill the client process.\n\ntecmint=# select * from a;\n  id\n----\n   1\n   2\n   3\n(3 rows)\ntecmint=#                --> Disconnect the sync replica now. 
LAN on \nreplica is DOWN\ntecmint=# insert into a values (4); --> Kill the client process\nTerminated\nxzizka@service-vm:~$ psql -U postgres -h 192.168.122.6 -p 5432 -d tecmint\nPassword for user postgres:\npsql (13.2 (Debian 13.2-1.pgdg100+1))\nType \"help\" for help.\n\ntecmint=# select * from a;\n  id\n----\n   1\n   2\n   3\n(3 rows)\n\ntecmint=# --> Number 4 is not there. Now switch the LAN on sync replica ON.\n\n----------\n\nResult from sync replica after the LAN is again UP:\ntecmint=# select * from a;\n  id\n----\n   1\n   2\n   3\n   4\n(4 rows)\n\n\nIn this situation, try to insert the number 4 again to the table.\n\ntecmint=# select * from a;\n  id\n----\n   1\n   2\n   3\n(3 rows)\n\ntecmint=# insert into a values (4);\nERROR:  duplicate key value violates unique constraint \"a_pkey\"\nDETAIL:  Key (id)=(4) already exists.\ntecmint=#\n\nThis is really strange... Application can be confused, It is not \npossible to insert record, which is not there, but some systems which \nuse the sync node as a read replica maybe already read that record from \nthe sync replica database and done some steps which can cause issues and \ncan be hard to track.\n\nIf I say, that it would be hard to send the CTRL+C to the database from \nthe client, I need to say, that the 2 situations I described here can \nhappen in real.\n\nWhat do you think?\n\nThank you and regards\nOndrej\n\nOn 20/04/2021 17:23, Aleksander Alekseev wrote:\n> Hi Ondřej,\n>\n> Thanks for the report. It seems to be a clear violation of what is\n> promised in the docs. Although it's unlikely that someone implemented\n> an application which deals with important data and \"pressed Ctr+C\" as\n> it's done in psql. So this might be not such a critical issue after\n> all. 
BTW what version of PostgreSQL are you using?\n>\n>\n> On Mon, Apr 19, 2021 at 10:13 PM Ondřej Žižka <ondrej.zizka@stratox.cz> wrote:\n>> Hello all,\n>> I would like to know your opinion on the following behaviour I see for PostgreSQL setup with synchronous replication.\n>>\n>> This behaviour happens in a special use case. In this use case, there are 2 synchronous replicas with the following config (truncated):\n>>\n>> - 2 nodes\n>> - synchronous_standby_names='*'\n>> - synchronous_commit=remote_apply\n>>\n>>\n>> With this setup run the following steps (LAN down - LAN between master and replica):\n>> -----------------\n>> postgres=# truncate table a;\n>> TRUNCATE TABLE\n>> postgres=# insert into a values (1); -- LAN up, insert has been applied to replica.\n>> INSERT 0 1\n>> Vypnu LAN na serveru se standby:\n>> postgres=# insert into a values (2); --LAN down, waiting for a confirmation from sync replica. In this situation cancel it (press CTRL+C)\n>> ^CCancel request sent\n>> WARNING: canceling wait for synchronous replication due to user request\n>> DETAIL: The transaction has already committed locally, but might not have been replicated to the standby.\n>> INSERT 0 1\n>> There will be warning that commit was performed only locally:\n>> 2021-04-12 19:55:53.063 CEST [26104] WARNING: canceling wait for synchronous replication due to user request\n>> 2021-04-12 19:55:53.063 CEST [26104] DETAIL: The transaction has already committed locally, but might not have been replicated to the standby.\n>>\n>> postgres=# insert into a values (2); --LAN down, waiting for a confirmation from sync replica. 
In this situation cancel it (press CTRL+C)\n>> ^CCancel request sent\n>> WARNING: canceling wait for synchronous replication due to user request\n>> DETAIL: The transaction has already committed locally, but might not have been replicated to the standby.\n>> INSERT 0 1\n>> postgres=# insert into a values (2); --LAN down, waiting for sync replica, second attempt, cancel it as well (CTRL+C)\n>> ^CCancel request sent\n>> WARNING: canceling wait for synchronous replication due to user request\n>> DETAIL: The transaction has already committed locally, but might not have been replicated to the standby.\n>> INSERT 0 1\n>> postgres=# update a set n=3 where n=2; --LAN down, waiting for sync replica, cancel it (CTRL+C)\n>> ^CCancel request sent\n>> WARNING: canceling wait for synchronous replication due to user request\n>> DETAIL: The transaction has already committed locally, but might not have been replicated to the standby.\n>> UPDATE 2\n>> postgres=# update a set n=3 where n=2; -- run the same update,because data from the previous attempt was commited on master, it is sucessfull, but no changes\n>> UPDATE 0\n>> postgres=# select * from a;\n>> n\n>> ---\n>> 1\n>> 3\n>> 3\n>> (3 rows)\n>> postgres=#\n>> ------------------------\n>>\n>> Now, there is only value 1 in the sync replica table (no other values), data is not in sync. This is expected, after the LAN restore, data will come sync again, but if the main/primary node will fail and we failover to replica before the LAN is back up or the storage for this node would be destroyed and data would not sync to replica before it, we will lose data even if the client received successful commit (with a warning).\n>> From the synchronous_commit=remote_write level and \"higher\", I would expect, that when the remote application (doesn't matter if flush, write or apply) would not be applied I would not receive a confirmation about the commit (even with a warning). 
Something like, if there is no commit from sync replica, there is no commit on primary and if someone performs the steps above, the whole transaction will not send a confirmation.\n>>\n>> This can cause issues if the application receives a confirmation about the success and performs some follow-up steps e.g. create a user account and sends a request to the mail system to create an account or create a VPN account. If the scenario above happens, there can exist a VPN account that does not have any presence in the central database and can be a security issue.\n>>\n>> I hope I explained it sufficiently. :-)\n>>\n>> Do you think, that would be possible to implement a process that would solve this use case?\n>>\n>> Thank you\n>> Ondrej\n>\n>\n\n\n", "msg_date": "Tue, 20 Apr 2021 18:49:21 +0100", "msg_from": "=?UTF-8?B?T25kxZllaiDFvWnFvmth?= <ondrej.zizka@stratox.cz>", "msg_from_op": true, "msg_subject": "Re: Synchronous commit behavior during network outage" }, { "msg_contents": "Hi!\n\n\nThis is a known issue with synchronous replication [1]. You might inject \ninto unmodified operation some dummy modification to overcome the \nnegative sides of such partially committing without source code patching.\n\n\nOn 20.04.2021 19:23, Aleksander Alekseev wrote:\n> Although it's unlikely that someone implemented\n> an application which deals with important data and \"pressed Ctr+C\" as\n> it's done in psql.\n\n\nSome client libraries have feature to cancel session that has similar \neffect to \"Ctrl+C\" from psql after specified by client deadline \nexpiration [2]. 
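To make the timing of this failure mode concrete, here is a minimal, self-contained Python sketch of the control flow (a toy model with invented names — not PostgreSQL source code): the commit record is made durable in the local WAL *before* the backend enters the synchronous-replication wait, so a client cancel or expired deadline can only abandon the wait, never the already-performed local commit.

```python
import threading
import time

class PrimaryBackend:
    """Toy model of a primary handling synchronous_commit = remote_apply."""

    def __init__(self):
        self.local_wal = []                    # durable local state
        self.standby_ack = threading.Event()   # set when the standby confirms

    def commit(self, record, cancel, timeout):
        # Step 1: the commit record is flushed to the *local* WAL first.
        self.local_wal.append(record)
        # Step 2: only then does the backend wait for the standby ACK.
        deadline = time.monotonic() + timeout
        while not self.standby_ack.is_set():
            if cancel.is_set() or time.monotonic() > deadline:
                # Cancelling aborts the *wait*, not the local commit.
                return "WARNING: committed locally, not confirmed by standby"
            time.sleep(0.01)
        return "COMMIT"

primary = PrimaryBackend()
cancel = threading.Event()
cancel.set()  # client deadline expired / Ctrl+C, standby never answered

status = primary.commit("INSERT 2", cancel, timeout=1.0)
print(status)                 # the client sees a warning, not a clean COMMIT
print(primary.local_wal)      # ...but the row is already durable locally
```

If the primary is lost before the LAN recovers, `local_wal` here stands in for the data that exists only on the primary despite the client never having received an unqualified success.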
Hence, this case might occur quite often when an application \ninteracts with the database.\n\n\n> On Mon, Apr 19, 2021 at 10:13 PM Ondřej Žižka <ondrej.zizka@stratox.cz> wrote:\n>\n> From the synchronous_commit=remote_write level and \"higher\", I would expect, that when the remote application (doesn't matter if flush, write or apply) would not be applied I would not receive a confirmation about the commit (even with a warning). Something like, if there is no commit from sync replica, there is no commit on primary and if someone performs the steps above, the whole transaction will not send a confirmation.\n\n\nThe warning has to be accounted for here, and the performed commit must not \nbe treated as *successful*.\n\n\n1. \nhttps://www.postgresql.org/message-id/C1F7905E-5DB2-497D-ABCC-E14D4DEE506C%40yandex-team.ru\n\n2. \nhttps://www.postgresql.org/message-id/CANtu0ogbu%2By6Py963p-zKJ535b8zm5AOq7zkX7wW-tryPYi1DA%40mail.gmail.com\n\n\n-- \nRegards,\nMaksim Milyutin\n\n\n\n", "msg_date": "Tue, 20 Apr 2021 20:51:08 +0300", "msg_from": "Maksim Milyutin <milyutinma@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Synchronous commit behavior during network outage" }, { "msg_contents": "Hello Maksim,\n\nI know your post [1]. That thread is why we performed more tests \n(see my other email in this thread). We are trying to \nimplement an RPO=0 solution using PostgreSQL. Knowing this... Would it be
You might \n> inject into unmodified operation some dummy modification to overcome \n> the negative sides of such partially committing without source code \n> patching.\n>\n>\n> On 20.04.2021 19:23, Aleksander Alekseev wrote:\n>> Although it's unlikely that someone implemented\n>> an application which deals with important data and \"pressed Ctr+C\" as\n>> it's done in psql.\n>\n>\n> Some client libraries have feature to cancel session that has similar \n> effect to \"Ctrl+C\" from psql after specified by client deadline \n> expiration [2]. Hence, this case might be quite often when application \n> interacts with database.\n>\n>\n>> On Mon, Apr 19, 2021 at 10:13 PM Ondřej Žižka \n>> <ondrej.zizka@stratox.cz> wrote:\n>>\n>>  From the synchronous_commit=remote_write level and \"higher\", I would \n>> expect, that when the remote application (doesn't matter if flush, \n>> write or apply) would not be applied I would not receive a \n>> confirmation about the commit (even with a warning). Something like, \n>> if there is no commit from sync replica, there is no commit on \n>> primary and if someone performs the steps above, the whole \n>> transaction will not send a confirmation.\n>\n>\n> The warning have to be accounted here and performed commit have not to \n> be treated as *successful*.\n>\n>\n> 1. \n> https://www.postgresql.org/message-id/C1F7905E-5DB2-497D-ABCC-E14D4DEE506C%40yandex-team.ru\n>\n> 2. \n> https://www.postgresql.org/message-id/CANtu0ogbu%2By6Py963p-zKJ535b8zm5AOq7zkX7wW-tryPYi1DA%40mail.gmail.com\n>\n>\n\n\n", "msg_date": "Tue, 20 Apr 2021 19:00:22 +0100", "msg_from": "=?UTF-8?B?T25kxZllaiDFvWnFvmth?= <ondrej.zizka@stratox.cz>", "msg_from_op": true, "msg_subject": "Re: Synchronous commit behavior during network outage" }, { "msg_contents": "I am sorry, I forgot mentioned, that in the second situation I added a \nprimary key to the table.\n\nOndrej\n\n\nOn 20/04/2021 18:49, Ondřej Žižka wrote:\n> Hello Aleksander,\n>\n> Thank you for the reaction. 
This was tested on version 13.2.\n>\n> There are also other possible situations with the same setup and \n> similar issue:\n>\n> -----------------\n> When the background process on server fails....\n>\n> On postgresql1:\n> tecmint=# select * from a; --> LAN on sync replica is OK\n>  id\n> ----\n>   1\n> (1 row)\n>\n> tecmint=# insert into a values (2); ---> LAN on sync replica is DOWN \n> and insert is waiting. During this time kill the background process on \n> the PostgreSQL server for this session\n> WARNING:  canceling the wait for synchronous replication and \n> terminating connection due to administrator command\n> DETAIL:  The transaction has already committed locally, but might not \n> have been replicated to the standby.\n> server closed the connection unexpectedly\n>     This probably means the server terminated abnormally\n>     before or while processing the request.\n> The connection to the server was lost. Attempting reset: Succeeded.\n> tecmint=# select * from a;\n>  id\n> ----\n>   1\n>   2\n> (2 rows)\n>\n> tecmint=# ---> LAN on sync replica is still DOWN\n>\n> The potgres session will restore after the background process failed. \n> When you run select on master, it still looks OK. But data is still \n> not replicated on the sync replica. If we lost the master now, we \n> would lost this data as well.\n>\n> **************\n> Another case\n> **************\n>\n> Kill the client process.\n>\n> tecmint=# select * from a;\n>  id\n> ----\n>   1\n>   2\n>   3\n> (3 rows)\n> tecmint=#                --> Disconnect the sync replica now. LAN on \n> replica is DOWN\n> tecmint=# insert into a values (4); --> Kill the client process\n> Terminated\n> xzizka@service-vm:~$ psql -U postgres -h 192.168.122.6 -p 5432 -d tecmint\n> Password for user postgres:\n> psql (13.2 (Debian 13.2-1.pgdg100+1))\n> Type \"help\" for help.\n>\n> tecmint=# select * from a;\n>  id\n> ----\n>   1\n>   2\n>   3\n> (3 rows)\n>\n> tecmint=# --> Number 4 is not there. 
Now switch the LAN on sync \n> replica ON.\n>\n> ----------\n>\n> Result from sync replica after the LAN is again UP:\n> tecmint=# select * from a;\n>  id\n> ----\n>   1\n>   2\n>   3\n>   4\n> (4 rows)\n>\n>\n> In this situation, try to insert the number 4 again to the table.\n>\n> tecmint=# select * from a;\n>  id\n> ----\n>   1\n>   2\n>   3\n> (3 rows)\n>\n> tecmint=# insert into a values (4);\n> ERROR:  duplicate key value violates unique constraint \"a_pkey\"\n> DETAIL:  Key (id)=(4) already exists.\n> tecmint=#\n>\n> This is really strange... Application can be confused, It is not \n> possible to insert record, which is not there, but some systems which \n> use the sync node as a read replica maybe already read that record \n> from the sync replica database and done some steps which can cause \n> issues and can be hard to track.\n>\n> If I say, that it would be hard to send the CTRL+C to the database \n> from the client, I need to say, that the 2 situations I described here \n> can happen in real.\n>\n> What do you think?\n>\n> Thank you and regards\n> Ondrej\n>\n> On 20/04/2021 17:23, Aleksander Alekseev wrote:\n>> Hi Ondřej,\n>>\n>> Thanks for the report. It seems to be a clear violation of what is\n>> promised in the docs. Although it's unlikely that someone implemented\n>> an application which deals with important data and \"pressed Ctr+C\" as\n>> it's done in psql. So this might be not such a critical issue after\n>> all. BTW what version of PostgreSQL are you using?\n>>\n>>\n>> On Mon, Apr 19, 2021 at 10:13 PM Ondřej Žižka \n>> <ondrej.zizka@stratox.cz> wrote:\n>>> Hello all,\n>>> I would like to know your opinion on the following behaviour I see \n>>> for PostgreSQL setup with synchronous replication.\n>>>\n>>> This behaviour happens in a special use case. 
In this use case, \n>>> there are 2 synchronous replicas with the following config (truncated):\n>>>\n>>> - 2 nodes\n>>> - synchronous_standby_names='*'\n>>> - synchronous_commit=remote_apply\n>>>\n>>>\n>>> With this setup run the following steps (LAN down - LAN between \n>>> master and replica):\n>>> -----------------\n>>> postgres=# truncate table a;\n>>> TRUNCATE TABLE\n>>> postgres=# insert into a values (1); -- LAN up, insert has been \n>>> applied to replica.\n>>> INSERT 0 1\n>>> Vypnu LAN na serveru se standby:\n>>> postgres=# insert into a values (2); --LAN down, waiting for a \n>>> confirmation from sync replica. In this situation cancel it (press \n>>> CTRL+C)\n>>> ^CCancel request sent\n>>> WARNING:  canceling wait for synchronous replication due to user \n>>> request\n>>> DETAIL:  The transaction has already committed locally, but might \n>>> not have been replicated to the standby.\n>>> INSERT 0 1\n>>> There will be warning that commit was performed only locally:\n>>> 2021-04-12 19:55:53.063 CEST [26104] WARNING:  canceling wait for \n>>> synchronous replication due to user request\n>>> 2021-04-12 19:55:53.063 CEST [26104] DETAIL:  The transaction has \n>>> already committed locally, but might not have been replicated to the \n>>> standby.\n>>>\n>>> postgres=# insert into a values (2); --LAN down, waiting for a \n>>> confirmation from sync replica. 
In this situation cancel it (press \n>>> CTRL+C)\n>>> ^CCancel request sent\n>>> WARNING:  canceling wait for synchronous replication due to user \n>>> request\n>>> DETAIL:  The transaction has already committed locally, but might \n>>> not have been replicated to the standby.\n>>> INSERT 0 1\n>>> postgres=# insert into a values (2); --LAN down, waiting for sync \n>>> replica, second attempt, cancel it as well (CTRL+C)\n>>> ^CCancel request sent\n>>> WARNING:  canceling wait for synchronous replication due to user \n>>> request\n>>> DETAIL:  The transaction has already committed locally, but might \n>>> not have been replicated to the standby.\n>>> INSERT 0 1\n>>> postgres=# update a set n=3 where n=2; --LAN down, waiting for sync \n>>> replica, cancel it (CTRL+C)\n>>> ^CCancel request sent\n>>> WARNING:  canceling wait for synchronous replication due to user \n>>> request\n>>> DETAIL:  The transaction has already committed locally, but might \n>>> not have been replicated to the standby.\n>>> UPDATE 2\n>>> postgres=# update a set n=3 where n=2; -- run the same \n>>> update,because data from the previous attempt was commited on \n>>> master, it is sucessfull, but no changes\n>>> UPDATE 0\n>>> postgres=# select * from a;\n>>>   n\n>>> ---\n>>>   1\n>>>   3\n>>>   3\n>>> (3 rows)\n>>> postgres=#\n>>> ------------------------\n>>>\n>>> Now, there is only value 1 in the sync replica table (no other \n>>> values), data is not in sync. 
This is expected, after the LAN \n>>> restore, data will come sync again, but if the main/primary node \n>>> will fail and we failover to replica before the LAN is back up or \n>>> the storage for this node would be destroyed and data would not sync \n>>> to replica before it, we will lose data even if the client received \n>>> successful commit (with a warning).\n>>>  From the synchronous_commit=remote_write level and \"higher\", I \n>>> would expect, that when the remote application (doesn't matter if \n>>> flush, write or apply) would not be applied I would not receive a \n>>> confirmation about the commit (even with a warning). Something like, \n>>> if there is no commit from sync replica, there is no commit on \n>>> primary and if someone performs the steps above, the whole \n>>> transaction will not send a confirmation.\n>>>\n>>> This can cause issues if the application receives a confirmation \n>>> about the success and performs some follow-up steps e.g. create a \n>>> user account and sends a request to the mail system to create an \n>>> account or create a VPN account. If the scenario above happens, \n>>> there can exist a VPN account that does not have any presence in the \n>>> central database and can be a security issue.\n>>>\n>>> I hope I explained it sufficiently. :-)\n>>>\n>>> Do you think, that would be possible to implement a process that \n>>> would solve this use case?\n>>>\n>>> Thank you\n>>> Ondrej\n>>\n>>\n\n\n", "msg_date": "Tue, 20 Apr 2021 19:05:31 +0100", "msg_from": "=?UTF-8?B?T25kxZllaiDFvWnFvmth?= <ondrej.zizka@stratox.cz>", "msg_from_op": true, "msg_subject": "Re: Synchronous commit behavior during network outage" }, { "msg_contents": "\nOn 20.04.2021 19:38, Tomas Vondra wrote:\n>\n> On 4/20/21 6:23 PM, Aleksander Alekseev wrote:\n>> Hi Ondřej,\n>>\n>> Thanks for the report. It seems to be a clear violation of what is\n>> promised in the docs. 
Although it's unlikely that someone implemented\n>> an application which deals with important data and \"pressed Ctr+C\" as\n>> it's done in psql. So this might be not such a critical issue after\n>> all. BTW what version of PostgreSQL are you using?\n>>\n> Which part of the docs does this contradict?\n\n\nI think Aleksander refers to the following phrase in the docs:\n\n\"The guarantee we offer is that the application will not receive \nexplicit acknowledgment of the successful commit of a transaction until \nthe WAL data is known to be safely received by all the synchronous \nstandbys.\" [1]\n\nAnd IMO the confusion here concerns the notion of `successful commit`. \nDoes the warning attached to the received commit message make it not \n*successful*? I think we have to explicitly mention the cases of \ncancellation and session termination in the docs to avoid ambiguity in \nthe understanding of the phrase above.\n\n\n1. \nhttps://www.postgresql.org/docs/current/warm-standby.html#SYNCHRONOUS-REPLICATION-HA\n\n-- \nRegards,\nMaksim Milyutin\n\n\n\n", "msg_date": "Tue, 20 Apr 2021 21:18:12 +0300", "msg_from": "Maksim Milyutin <milyutinma@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Synchronous commit behavior during network outage" }, { "msg_contents": "One idea here is to make the backend ignore query cancellation/backend\ntermination while waiting for the synchronous commit ACK. This way client\nnever reads the data that was never flushed remotely. The problem with this\napproach is that your backends get stuck until your commit log record is\nflushed on the remote side. Also, the client can see the data not flushed\nremotely if the server crashes and comes back online. You can prevent the\nlatter case by making a SyncRepWaitForLSN before opening up the connections\nto the non-superusers. 
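That 'wait before opening up the connections' gate can be approximated from the operator's side in plain SQL. A hedged sketch (this is not the prototype's actual mechanism): run on the primary, it checks whether the synchronous standby has flushed everything the primary itself has flushed:

```sql
-- Sketch: has the sync standby caught up to the primary's flush position?
SELECT application_name,
       sync_state,
       pg_current_wal_flush_lsn() AS local_flush_lsn,
       flush_lsn                  AS standby_flush_lsn,
       pg_current_wal_flush_lsn() <= flush_lsn AS standby_caught_up
FROM pg_stat_replication
WHERE sync_state = 'sync';
```

Only once standby_caught_up reads true for every sync standby can one assume no locally committed transaction is missing on the standby.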
I have a working prototype of this logic, if there\nis enough interest I can post the patch.\n\n\n\n\n\nOn Tue, Apr 20, 2021 at 11:25 AM Ondřej Žižka <ondrej.zizka@stratox.cz>\nwrote:\n\n> I am sorry, I forgot mentioned, that in the second situation I added a\n> primary key to the table.\n>\n> Ondrej\n>\n>\n> On 20/04/2021 18:49, Ondřej Žižka wrote:\n> > Hello Aleksander,\n> >\n> > Thank you for the reaction. This was tested on version 13.2.\n> >\n> > There are also other possible situations with the same setup and\n> > similar issue:\n> >\n> > -----------------\n> > When the background process on server fails....\n> >\n> > On postgresql1:\n> > tecmint=# select * from a; --> LAN on sync replica is OK\n> > id\n> > ----\n> > 1\n> > (1 row)\n> >\n> > tecmint=# insert into a values (2); ---> LAN on sync replica is DOWN\n> > and insert is waiting. During this time kill the background process on\n> > the PostgreSQL server for this session\n> > WARNING: canceling the wait for synchronous replication and\n> > terminating connection due to administrator command\n> > DETAIL: The transaction has already committed locally, but might not\n> > have been replicated to the standby.\n> > server closed the connection unexpectedly\n> > This probably means the server terminated abnormally\n> > before or while processing the request.\n> > The connection to the server was lost. Attempting reset: Succeeded.\n> > tecmint=# select * from a;\n> > id\n> > ----\n> > 1\n> > 2\n> > (2 rows)\n> >\n> > tecmint=# ---> LAN on sync replica is still DOWN\n> >\n> > The potgres session will restore after the background process failed.\n> > When you run select on master, it still looks OK. But data is still\n> > not replicated on the sync replica. 
If we lost the master now, we\n> > would lost this data as well.\n> >\n> > **************\n> > Another case\n> > **************\n> >\n> > Kill the client process.\n> >\n> > tecmint=# select * from a;\n> > id\n> > ----\n> > 1\n> > 2\n> > 3\n> > (3 rows)\n> > tecmint=# --> Disconnect the sync replica now. LAN on\n> > replica is DOWN\n> > tecmint=# insert into a values (4); --> Kill the client process\n> > Terminated\n> > xzizka@service-vm:~$ psql -U postgres -h 192.168.122.6 -p 5432 -d\n> tecmint\n> > Password for user postgres:\n> > psql (13.2 (Debian 13.2-1.pgdg100+1))\n> > Type \"help\" for help.\n> >\n> > tecmint=# select * from a;\n> > id\n> > ----\n> > 1\n> > 2\n> > 3\n> > (3 rows)\n> >\n> > tecmint=# --> Number 4 is not there. Now switch the LAN on sync\n> > replica ON.\n> >\n> > ----------\n> >\n> > Result from sync replica after the LAN is again UP:\n> > tecmint=# select * from a;\n> > id\n> > ----\n> > 1\n> > 2\n> > 3\n> > 4\n> > (4 rows)\n> >\n> >\n> > In this situation, try to insert the number 4 again to the table.\n> >\n> > tecmint=# select * from a;\n> > id\n> > ----\n> > 1\n> > 2\n> > 3\n> > (3 rows)\n> >\n> > tecmint=# insert into a values (4);\n> > ERROR: duplicate key value violates unique constraint \"a_pkey\"\n> > DETAIL: Key (id)=(4) already exists.\n> > tecmint=#\n> >\n> > This is really strange... Application can be confused, It is not\n> > possible to insert record, which is not there, but some systems which\n> > use the sync node as a read replica maybe already read that record\n> > from the sync replica database and done some steps which can cause\n> > issues and can be hard to track.\n> >\n> > If I say, that it would be hard to send the CTRL+C to the database\n> > from the client, I need to say, that the 2 situations I described here\n> > can happen in real.\n> >\n> > What do you think?\n> >\n> > Thank you and regards\n> > Ondrej\n> >\n> > On 20/04/2021 17:23, Aleksander Alekseev wrote:\n> >> Hi Ondřej,\n> >>\n> >> Thanks for the report. 
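One client-side mitigation for the duplicate-key confusion in the kill-the-client scenario above is to make the retry idempotent, so re-running the statement succeeds whether or not the earlier, unacknowledged INSERT actually committed on the primary. A sketch, assuming table a has a primary key on id as in that test:

```sql
-- Sketch: idempotent retry of the possibly-committed insert
INSERT INTO a (id) VALUES (4)
ON CONFLICT (id) DO NOTHING;
```

This only removes the surprising duplicate-key error on retry; it does not answer the underlying durability question of whether the row reached the standby.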
It seems to be a clear violation of what is\n> >> promised in the docs. Although it's unlikely that someone implemented\n> >> an application which deals with important data and \"pressed Ctr+C\" as\n> >> it's done in psql. So this might be not such a critical issue after\n> >> all. BTW what version of PostgreSQL are you using?\n> >>\n> >>\n> >> On Mon, Apr 19, 2021 at 10:13 PM Ondřej Žižka\n> >> <ondrej.zizka@stratox.cz> wrote:\n> >>> Hello all,\n> >>> I would like to know your opinion on the following behaviour I see\n> >>> for PostgreSQL setup with synchronous replication.\n> >>>\n> >>> This behaviour happens in a special use case. In this use case,\n> >>> there are 2 synchronous replicas with the following config (truncated):\n> >>>\n> >>> - 2 nodes\n> >>> - synchronous_standby_names='*'\n> >>> - synchronous_commit=remote_apply\n> >>>\n> >>>\n> >>> With this setup run the following steps (LAN down - LAN between\n> >>> master and replica):\n> >>> -----------------\n> >>> postgres=# truncate table a;\n> >>> TRUNCATE TABLE\n> >>> postgres=# insert into a values (1); -- LAN up, insert has been\n> >>> applied to replica.\n> >>> INSERT 0 1\n> >>> Vypnu LAN na serveru se standby:\n> >>> postgres=# insert into a values (2); --LAN down, waiting for a\n> >>> confirmation from sync replica. 
In this situation cancel it (press\n> >>> CTRL+C)\n> >>> ^CCancel request sent\n> >>> WARNING: canceling wait for synchronous replication due to user\n> >>> request\n> >>> DETAIL: The transaction has already committed locally, but might\n> >>> not have been replicated to the standby.\n> >>> INSERT 0 1\n> >>> There will be warning that commit was performed only locally:\n> >>> 2021-04-12 19:55:53.063 CEST [26104] WARNING: canceling wait for\n> >>> synchronous replication due to user request\n> >>> 2021-04-12 19:55:53.063 CEST [26104] DETAIL: The transaction has\n> >>> already committed locally, but might not have been replicated to the\n> >>> standby.\n> >>>\n> >>> postgres=# insert into a values (2); --LAN down, waiting for a\n> >>> confirmation from sync replica. In this situation cancel it (press\n> >>> CTRL+C)\n> >>> ^CCancel request sent\n> >>> WARNING: canceling wait for synchronous replication due to user\n> >>> request\n> >>> DETAIL: The transaction has already committed locally, but might\n> >>> not have been replicated to the standby.\n> >>> INSERT 0 1\n> >>> postgres=# insert into a values (2); --LAN down, waiting for sync\n> >>> replica, second attempt, cancel it as well (CTRL+C)\n> >>> ^CCancel request sent\n> >>> WARNING: canceling wait for synchronous replication due to user\n> >>> request\n> >>> DETAIL: The transaction has already committed locally, but might\n> >>> not have been replicated to the standby.\n> >>> INSERT 0 1\n> >>> postgres=# update a set n=3 where n=2; --LAN down, waiting for sync\n> >>> replica, cancel it (CTRL+C)\n> >>> ^CCancel request sent\n> >>> WARNING: canceling wait for synchronous replication due to user\n> >>> request\n> >>> DETAIL: The transaction has already committed locally, but might\n> >>> not have been replicated to the standby.\n> >>> UPDATE 2\n> >>> postgres=# update a set n=3 where n=2; -- run the same\n> >>> update,because data from the previous attempt was commited on\n> >>> master, it is sucessfull, but no 
changes\n> >>> UPDATE 0\n> >>> postgres=# select * from a;\n> >>> n\n> >>> ---\n> >>> 1\n> >>> 3\n> >>> 3\n> >>> (3 rows)\n> >>> postgres=#\n> >>> ------------------------\n> >>>\n> >>> Now, there is only value 1 in the sync replica table (no other\n> >>> values), data is not in sync. This is expected, after the LAN\n> >>> restore, data will come sync again, but if the main/primary node\n> >>> will fail and we failover to replica before the LAN is back up or\n> >>> the storage for this node would be destroyed and data would not sync\n> >>> to replica before it, we will lose data even if the client received\n> >>> successful commit (with a warning).\n> >>> From the synchronous_commit=remote_write level and \"higher\", I\n> >>> would expect, that when the remote application (doesn't matter if\n> >>> flush, write or apply) would not be applied I would not receive a\n> >>> confirmation about the commit (even with a warning). Something like,\n> >>> if there is no commit from sync replica, there is no commit on\n> >>> primary and if someone performs the steps above, the whole\n> >>> transaction will not send a confirmation.\n> >>>\n> >>> This can cause issues if the application receives a confirmation\n> >>> about the success and performs some follow-up steps e.g. create a\n> >>> user account and sends a request to the mail system to create an\n> >>> account or create a VPN account. If the scenario above happens,\n> >>> there can exist a VPN account that does not have any presence in the\n> >>> central database and can be a security issue.\n> >>>\n> >>> I hope I explained it sufficiently. :-)\n> >>>\n> >>> Do you think, that would be possible to implement a process that\n> >>> would solve this use case?\n> >>>\n> >>> Thank you\n> >>> Ondrej\n> >>\n> >>\n>\n>\n>\n", "msg_date": "Tue, 20 Apr 2021 14:19:48 -0700", "msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Synchronous commit behavior during network outage" }, { "msg_contents": "Hello Satyanarayana,\n\nThis can be an option for us in our case. But there also needs to be a \nprocess how to detect these \"stuck commits\" and how to invalidate/remove \nthem, because in reality, if the app/user would not see the change in \nthe database, it/he/she will try to insert/delete it again. 
If it just gets \nstuck without management, a queue will build up in which there can be \n2 similar inserts/deletes, which can again cause issues (like with the \nprimary key I mentioned before).\n\nSo in this case the process should be:\n\n- The DBA receives information that write operations are stuck (the DBA, in \ncoordination with the infrastructure team, disconnects all clients and \nprevents new ones from creating a connection).\n- The DBA recognizes that there is a communication issue between \nthe primary and the sync replica (which caused the issue with the propagation \nof commits).\n- The DBA sees that some commits are in the \"stuck state\".\n- The DBA removes these stuck commits. Note: because the client never \nreceived a confirmation of a successful commit, the changes the \nclient tried to perform in the DB can't be considered successful.\n- The DBA and infrastructure team restore the communication between server \nnodes to be able to propagate commits from the primary node to the sync replica.\n- The DBA and infrastructure team allow new connections to the database.\n\nThis approach would require external monitoring and alerting, but I \nwould say that this is an acceptable solution. Would your patch be able \nto perform that?\n\nThank you\nOndrej\n\n\nOn 20/04/2021 22:19, SATYANARAYANA NARLAPURAM wrote:\n> One idea here is to make the backend ignore query cancellation/backend \n> termination while waiting for the synchronous commit ACK. This way \n> client never reads the data that was never flushed remotely. The \n> problem with this approach is that your backends get stuck until your \n> commit log record is flushed on the remote side. Also, the client can \n> see the data not flushed remotely if the server crashes and comes back \n> online. You can prevent the latter case by making a SyncRepWaitForLSN \n> before opening up the connections to the non-superusers. 
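The detection step above ('DBA will see that there are some commits that are in the stuck state') has a concrete handle in core PostgreSQL: a backend waiting for a standby acknowledgment shows the SyncRep wait event in pg_stat_activity. A hedged monitoring sketch:

```sql
-- Sketch: sessions stuck waiting for synchronous replication ACKs
SELECT pid, usename, state, wait_event_type, wait_event,
       now() - xact_start AS stuck_for, query
FROM pg_stat_activity
WHERE wait_event = 'SyncRep';
```

An external monitor alerting on long-lived rows from this query would cover the step where the DBA learns that write operations are stuck.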
I have a \n> working prototype of this logic, if there is enough interest I can \n> post the patch.\n>\n>\n>\n>\n>\n> On Tue, Apr 20, 2021 at 11:25 AM Ondřej Žižka <ondrej.zizka@stratox.cz \n> <mailto:ondrej.zizka@stratox.cz>> wrote:\n>\n> I am sorry, I forgot mentioned, that in the second situation I\n> added a\n> primary key to the table.\n>\n> Ondrej\n>\n>\n> On 20/04/2021 18:49, Ondřej Žižka wrote:\n> > Hello Aleksander,\n> >\n> > Thank you for the reaction. This was tested on version 13.2.\n> >\n> > There are also other possible situations with the same setup and\n> > similar issue:\n> >\n> > -----------------\n> > When the background process on server fails....\n> >\n> > On postgresql1:\n> > tecmint=# select * from a; --> LAN on sync replica is OK\n> >  id\n> > ----\n> >   1\n> > (1 row)\n> >\n> > tecmint=# insert into a values (2); ---> LAN on sync replica is\n> DOWN\n> > and insert is waiting. During this time kill the background\n> process on\n> > the PostgreSQL server for this session\n> > WARNING:  canceling the wait for synchronous replication and\n> > terminating connection due to administrator command\n> > DETAIL:  The transaction has already committed locally, but\n> might not\n> > have been replicated to the standby.\n> > server closed the connection unexpectedly\n> >     This probably means the server terminated abnormally\n> >     before or while processing the request.\n> > The connection to the server was lost. Attempting reset: Succeeded.\n> > tecmint=# select * from a;\n> >  id\n> > ----\n> >   1\n> >   2\n> > (2 rows)\n> >\n> > tecmint=# ---> LAN on sync replica is still DOWN\n> >\n> > The potgres session will restore after the background process\n> failed.\n> > When you run select on master, it still looks OK. But data is still\n> > not replicated on the sync replica. 
If we lost the master now, we\n> > would lost this data as well.\n> >\n> > **************\n> > Another case\n> > **************\n> >\n> > Kill the client process.\n> >\n> > tecmint=# select * from a;\n> >  id\n> > ----\n> >   1\n> >   2\n> >   3\n> > (3 rows)\n> > tecmint=#                --> Disconnect the sync replica now.\n> LAN on\n> > replica is DOWN\n> > tecmint=# insert into a values (4); --> Kill the client process\n> > Terminated\n> > xzizka@service-vm:~$ psql -U postgres -h 192.168.122.6 -p 5432\n> -d tecmint\n> > Password for user postgres:\n> > psql (13.2 (Debian 13.2-1.pgdg100+1))\n> > Type \"help\" for help.\n> >\n> > tecmint=# select * from a;\n> >  id\n> > ----\n> >   1\n> >   2\n> >   3\n> > (3 rows)\n> >\n> > tecmint=# --> Number 4 is not there. Now switch the LAN on sync\n> > replica ON.\n> >\n> > ----------\n> >\n> > Result from sync replica after the LAN is again UP:\n> > tecmint=# select * from a;\n> >  id\n> > ----\n> >   1\n> >   2\n> >   3\n> >   4\n> > (4 rows)\n> >\n> >\n> > In this situation, try to insert the number 4 again to the table.\n> >\n> > tecmint=# select * from a;\n> >  id\n> > ----\n> >   1\n> >   2\n> >   3\n> > (3 rows)\n> >\n> > tecmint=# insert into a values (4);\n> > ERROR:  duplicate key value violates unique constraint \"a_pkey\"\n> > DETAIL:  Key (id)=(4) already exists.\n> > tecmint=#\n> >\n> > This is really strange... 
Application can be confused, It is not\n> > possible to insert record, which is not there, but some systems\n> which\n> > use the sync node as a read replica maybe already read that record\n> > from the sync replica database and done some steps which can cause\n> > issues and can be hard to track.\n> >\n> > If I say, that it would be hard to send the CTRL+C to the database\n> > from the client, I need to say, that the 2 situations I\n> described here\n> > can happen in real.\n> >\n> > What do you think?\n> >\n> > Thank you and regards\n> > Ondrej\n> >\n> > On 20/04/2021 17:23, Aleksander Alekseev wrote:\n> >> Hi Ondřej,\n> >>\n> >> Thanks for the report. It seems to be a clear violation of what is\n> >> promised in the docs. Although it's unlikely that someone\n> implemented\n> >> an application which deals with important data and \"pressed\n> Ctr+C\" as\n> >> it's done in psql. So this might be not such a critical issue after\n> >> all. BTW what version of PostgreSQL are you using?\n> >>\n> >>\n> >> On Mon, Apr 19, 2021 at 10:13 PM Ondřej Žižka\n> >> <ondrej.zizka@stratox.cz <mailto:ondrej.zizka@stratox.cz>> wrote:\n> >>> Hello all,\n> >>> I would like to know your opinion on the following behaviour I\n> see\n> >>> for PostgreSQL setup with synchronous replication.\n> >>>\n> >>> This behaviour happens in a special use case. 
In this use case,\n> >>> there are 2 synchronous replicas with the following config\n> (truncated):\n> >>>\n> >>> - 2 nodes\n> >>> - synchronous_standby_names='*'\n> >>> - synchronous_commit=remote_apply\n> >>>\n> >>>\n> >>> With this setup run the following steps (LAN down - LAN between\n> >>> master and replica):\n> >>> -----------------\n> >>> postgres=# truncate table a;\n> >>> TRUNCATE TABLE\n> >>> postgres=# insert into a values (1); -- LAN up, insert has been\n> >>> applied to replica.\n> >>> INSERT 0 1\n> >>> Vypnu LAN na serveru se standby:\n> >>> postgres=# insert into a values (2); --LAN down, waiting for a\n> >>> confirmation from sync replica. In this situation cancel it\n> (press\n> >>> CTRL+C)\n> >>> ^CCancel request sent\n> >>> WARNING:  canceling wait for synchronous replication due to user\n> >>> request\n> >>> DETAIL:  The transaction has already committed locally, but might\n> >>> not have been replicated to the standby.\n> >>> INSERT 0 1\n> >>> There will be warning that commit was performed only locally:\n> >>> 2021-04-12 19:55:53.063 CEST [26104] WARNING: canceling wait for\n> >>> synchronous replication due to user request\n> >>> 2021-04-12 19:55:53.063 CEST [26104] DETAIL:  The transaction has\n> >>> already committed locally, but might not have been replicated\n> to the\n> >>> standby.\n> >>>\n> >>> postgres=# insert into a values (2); --LAN down, waiting for a\n> >>> confirmation from sync replica. 
In this situation cancel it\n> (press\n> >>> CTRL+C)\n> >>> ^CCancel request sent\n> >>> WARNING:  canceling wait for synchronous replication due to user\n> >>> request\n> >>> DETAIL:  The transaction has already committed locally, but might\n> >>> not have been replicated to the standby.\n> >>> INSERT 0 1\n> >>> postgres=# insert into a values (2); --LAN down, waiting for sync\n> >>> replica, second attempt, cancel it as well (CTRL+C)\n> >>> ^CCancel request sent\n> >>> WARNING:  canceling wait for synchronous replication due to user\n> >>> request\n> >>> DETAIL:  The transaction has already committed locally, but might\n> >>> not have been replicated to the standby.\n> >>> INSERT 0 1\n> >>> postgres=# update a set n=3 where n=2; --LAN down, waiting for\n> sync\n> >>> replica, cancel it (CTRL+C)\n> >>> ^CCancel request sent\n> >>> WARNING:  canceling wait for synchronous replication due to user\n> >>> request\n> >>> DETAIL:  The transaction has already committed locally, but might\n> >>> not have been replicated to the standby.\n> >>> UPDATE 2\n> >>> postgres=# update a set n=3 where n=2; -- run the same\n> >>> update,because data from the previous attempt was commited on\n> >>> master, it is sucessfull, but no changes\n> >>> UPDATE 0\n> >>> postgres=# select * from a;\n> >>>   n\n> >>> ---\n> >>>   1\n> >>>   3\n> >>>   3\n> >>> (3 rows)\n> >>> postgres=#\n> >>> ------------------------\n> >>>\n> >>> Now, there is only value 1 in the sync replica table (no other\n> >>> values), data is not in sync. 
This is expected, after the LAN\n> >>> restore, data will come sync again, but if the main/primary node\n> >>> will fail and we failover to replica before the LAN is back up or\n> >>> the storage for this node would be destroyed and data would not sync\n> >>> to replica before it, we will lose data even if the client received\n> >>> successful commit (with a warning).\n> >>>  From the synchronous_commit=remote_write level and \"higher\", I\n> >>> would expect, that when the remote application (doesn't matter if\n> >>> flush, write or apply) would not be applied I would not receive a\n> >>> confirmation about the commit (even with a warning). Something like,\n> >>> if there is no commit from sync replica, there is no commit on\n> >>> primary and if someone performs the steps above, the whole\n> >>> transaction will not send a confirmation.\n> >>>\n> >>> This can cause issues if the application receives a confirmation\n> >>> about the success and performs some follow-up steps e.g. create a\n> >>> user account and sends a request to the mail system to create an\n> >>> account or create a VPN account. If the scenario above happens,\n> >>> there can exist a VPN account that does not have any presence in the\n> >>> central database and can be a security issue.\n> >>>\n> >>> I hope I explained it sufficiently. :-)\n> >>>\n> >>> Do you think, that would be possible to implement a process that\n> >>> would solve this use case?\n> >>>\n> >>> Thank you\n> >>> Ondrej\n> >>\n> >>\n", "msg_date": "Wed, 21 Apr 2021 07:25:54 +0100", "msg_from": "=?UTF-8?B?T25kxZllaiDFvWnFvmth?= <ondrej.zizka@stratox.cz>", "msg_from_op": true, "msg_subject": "Re: Synchronous commit behavior during network outage" }, { "msg_contents": "Hi Timas,\n\n> > Thanks for the report. It seems to be a clear violation of what is\n> > promised in the docs. Although it's unlikely that someone implemented\n> > an application which deals with important data and \"pressed Ctr+C\" as\n> > it's done in psql. So this might be not such a critical issue after\n> > all. BTW what version of PostgreSQL are you using?\n> >\n>\n> Which part of the docs does this contradict?\n\nThe documentation to synchronous_commit = remote_apply explicitly states [1]:\n\n\"\"\"\nWhen set to remote_apply, commits will wait until replies from the\ncurrent synchronous standby(s) indicate they have received the commit\nrecord of the transaction and applied it, so that it has become\nvisible to queries on the standby(s), and also written to durable\nstorage on the standbys.\n\"\"\"\n\nHere commit on the master happened before receiving replies from the standby(s).\n\n[1]: https://www.postgresql.org/docs/13/runtime-config-wal.html#GUC-SYNCHRONOUS-COMMIT\n\n-- \nBest regards,\nAleksander Alekseev\n", "msg_date": "Wed, 21 Apr 2021 10:30:48 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Synchronous commit behavior during network outage" }, { "msg_contents": "On Tue, 2021-04-20 at 18:49 +0100, Ondřej Žižka wrote:\n> tecmint=# select * from a; --> LAN on sync replica is OK\n>   id\n> ----\n>    1\n> (1 row)\n> \n> tecmint=# insert into a values (2); ---> LAN on sync replica is DOWN and \n> insert is waiting. 
During this time kill the background process on the \n> PostgreSQL server for this session\n> WARNING:  canceling the wait for synchronous replication and terminating \n> connection due to administrator command\n> DETAIL:  The transaction has already committed locally, but might not \n> have been replicated to the standby.\n> server closed the connection unexpectedly\n>      This probably means the server terminated abnormally\n>      before or while processing the request.\n> The connection to the server was lost. Attempting reset: Succeeded.\n> \n> tecmint=# select * from a;\n>   id\n> ----\n>    1\n>    2\n> (2 rows)\n\nIt is well known that synchronous replication is subject to that problem,\nsince it doesn't use the two-phase commit protocol.\n\nWhat surprises me is that this is a warning.\nIn my opinion it should be an error.\n\nYours,\nLaurenz Albe\n", "msg_date": "Wed, 21 Apr 2021 09:50:47 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Synchronous commit behavior during network outage" }, { "msg_contents": "On Wed, 21 Apr 2021 at 9:51, Laurenz Albe <laurenz.albe@cybertec.at>\nwrote:\n\n> On Tue, 2021-04-20 at 18:49 +0100, Ondřej Žižka wrote:\n> > tecmint=# select * from a; --> LAN on sync replica is OK\n> >   id\n> > ----\n> >    1\n> > (1 row)\n> > \n> > tecmint=# insert into a values (2); ---> LAN on sync replica is DOWN and \n> > insert is waiting. During this time kill the background process on the \n> > PostgreSQL server for this session\n> > WARNING:  canceling the wait for synchronous replication and terminating \n> > connection due to administrator command\n> > DETAIL:  The transaction has already committed locally, but might not \n> > have been replicated to the standby.\n> > server closed the connection unexpectedly\n> >      This probably means the server terminated abnormally\n> >      before or while processing the request.\n> > The connection to the server was lost. 
Attempting reset: Succeeded.\n> >\n> > tecmint=# select * from a;\n> >   id\n> > ----\n> >    1\n> >    2\n> > (2 rows)\n\nIt is well known that synchronous replication is subject to that problem,\nsince it doesn't use the two-phase commit protocol.\n\nWhat surprises me is that this is a warning.\nIn my opinion it should be an error.\n\nyes, an error makes more sense\n\nRegards\n\nPavel\n\n> Yours,\n> Laurenz Albe\n", "msg_date": "Wed, 21 Apr 2021 09:54:10 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Synchronous commit behavior during network outage" }, { "msg_contents": ">\n> This can be an option for us in our case. 
But there also needs to be a\n> process how to detect these \"stuck commits\" and how to invalidate/remove\n> them, because in reality, if the app/user would not see the change in the\n> database, it/he/she will try to insert/delete it again. If it just stuck\n> without management, it will create a queue which can cause, that in the\n> queue there will be 2 similar inserts/deletes which can again cause issues\n> (like with the primary key I mentioned before).\n>\n\n This shouldn't be a problem as the previous transaction is still holding\nthe locks and the new transaction is blocked behind this. Outside of the\nsync replication, this can happen today too with glitches/timeouts/ retries\nbetween the client and the server. Am I missing something?\n\n\nSo the process should be in this case:\n>\n> - DBA receives information, that write operations stuck (DBA in\n> coordination with the infrastructure team disconnects all clients and\n> prevent new ones to create a new connection).\n>\nYou can monitor the pg_stat_activity for the SYNC_REP_WAIT_FLUSH wait types\nto detect this.\n\n\n> - DBA will recognize, that there is an issue in communication between the\n> primary and the sync replica (caused the issue with the propagation of\n> commits)\n> - DBA will see that there are some commits that are in the \"stuck state\"\n> - DBA removes these stuck commits. Note: Because the client never received\n> a confirmation about the successful commit -> changes in the DB client\n> tried to perform can't be considered as successful.\n>\n\nYou should consider these as in doubt transactions and the client should\nretry. Again, this can happen in a normal server crash case too. For\nexample, a transaction committed on the server and before sending the\nacknowledgement crashed. 
The client should know how to handle these cases.\n\n> - DBA and infrastructure team restore the communication between\n> server nodes to be able to propagate commits from the primary node\n> to sync replica.\n> - DBA and infrastructure team allows new connections to the database\n>\n> This approach would require external monitoring and alerting, but I would\n> say, that this is an acceptable solution. Would your patch be able to\n> perform that?\n>\nMy patch handles ignoring the cancel events. I ended up keeping the other\nlogic (blocking super user connections in the client_authentication_hook).\n\nThere is a third problem that I didn't talk about in this thread where the\nasync clients (including logical decoding and replication clients) can get\nahead of the new primary and there is no easier way to undo those changes.\nFor this problem, we need to implement some protocol in the WAL sender\nwhere it sends the log to the consumer only up to the flush LSN of the\nstandby/quorum replicas. This is something I am working on right now.\n", "msg_date": "Wed, 21 Apr 2021 01:20:02 -0700", "msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Synchronous commit behavior during network outage" }, { "msg_contents": "Hello,\n\n > You can monitor the pg_stat_activity for the SYNC_REP_WAIT_FLUSH wait \ntypes to detect this.\n\nI tried to see this wait_event_type Client or IPC and wait_event \nClient_Read or SyncRep. In which situation can I see the \nSYNC_REP_WAIT_FLUSH value?\n\n > You should consider these as in doubt transactions and the client \nshould retry. Again, this can happen in a normal server crash case too. \nFor example, a transaction committed on the server and before sending \nthe acknowledgement crashed.  *The client should know how to handle \nthese cases.*\n\nI have just a light knowledge of the in-doubt transaction. Need to study \nmore about it, but in the real world the client is mostly 'stupid' and \nexpects only COMMIT or ROLLBACK. Nothing between.\n\n > There is a third problem that I didn't talk about in this thread \nwhere the async clients (including logical decoding and replication \nclients) can get ahead of the new primary and there is no easier way to \nundo those changes. For this problem, we need to implement some protocol \nin the WAL sender where it sends the log to the consumer only up to the \nflush LSN of the standby/quorum replicas. This is something I am working \non right now.\n\nWe set up an architecture with 4 nodes and Patroni as a cluster \nmanager. Two nodes are sync and each sync node has 1 async. In case \nsomething like this happens (e.g. network to sync replica fails and the \nuser presses CTRL+C), the async replica receives the transaction and \napplies it. 
If the outage is longer than some time (30s by default), management \nsoftware checks the LSN and creates a new sync replica from the ASYNC \nreplica.\n\nOndrej\n\nOn 21/04/2021 09:20, SATYANARAYANA NARLAPURAM wrote:\n>\n> This can be an option for us in our case. But there also needs to\n> be a process how to detect these \"stuck commits\" and how to\n> invalidate/remove them, because in reality, if the app/user would\n> not see the change in the database, it/he/she will try to\n> insert/delete it again. If it just stuck without management, it\n> will create a queue which can cause, that in the queue there will\n> be 2 similar inserts/deletes which can again cause issues (like\n> with the primary key I mentioned before).\n>\n> This shouldn't be a problem as the previous transaction is still \n> holding the locks and the new transaction is blocked behind this. \n> Outside of the sync replication, this can happen today too with \n> glitches/timeouts/retries between the client and the server. Am I \n> missing something?\n>\n> So the process should be in this case:\n>\n> - DBA receives information, that write operations stuck (DBA in\n> coordination with the infrastructure team disconnects all clients\n> and prevent new ones to create a new connection).\n>\n> You can monitor the pg_stat_activity for the SYNC_REP_WAIT_FLUSH wait \n> types to detect this.\n>\n> - DBA will recognize, that there is an issue in communication\n> between the primary and the sync replica (caused the issue with\n> the propagation of commits)\n> - DBA will see that there are some commits that are in the \"stuck\n> state\"\n> - DBA removes these stuck commits. Note: Because the client never\n> received a confirmation about the successful commit -> changes in\n> the DB client tried to perform can't be considered as successful.\n>\n> You should consider these as in doubt transactions and the client \n> should retry. Again, this can happen in a normal server crash case \n> too. For example, a transaction committed on the server and before \n> sending the acknowledgement crashed.  The client should know how to \n> handle these cases.\n>\n> - DBA and infrastructure team restore the communication between\n> server nodes to be able to propagate commits from the primary node\n> to sync replica.\n> - DBA and infrastructure team allows new connections to the database\n>\n> This approach would require external monitoring and alerting, but\n> I would say, that this is an acceptable solution. Would your patch\n> be able to perform that?\n>\n> My patch handles ignoring the cancel events. I ended up keeping the \n> other logic (blocking super user connections in the \n> client_authentication_hook).\n>\n> There is a third problem that I didn't talk about in this thread where \n> the async clients (including logical decoding and replication clients) \n> can get ahead of the new primary and there is no easier way to undo \n> those changes. For this problem, we need to implement some protocol in \n> the WAL sender where it sends the log to the consumer only up to the \n> flush LSN of the standby/quorum replicas. This is something I am \n> working on right now.\n", "msg_date": "Wed, 21 Apr 2021 20:03:16 +0100", "msg_from": "=?UTF-8?B?T25kxZllaiDFvWnFvmth?= <ondrej.zizka@stratox.cz>", "msg_from_op": true, "msg_subject": "Re: Synchronous commit behavior during network outage" }, { "msg_contents": "Hi Ondrej!\n\n> On 19 Apr 2021, at 22:19, Ondřej Žižka <ondrej.zizka@stratox.cz> wrote:\n> \n> Do you think, that would be possible to implement a process that would solve this use case?\n> Thank you\n> Ondrej\n> \n\nFeel free to review the patch fixing this at [0]. It's classified as \"Server Features\", but I'm sure it's a bug fix.\n\nYandex.Cloud PG runs with this patch for more than half a year, because we cannot afford losing data in HA clusters.\n\nIt's a somewhat incomplete solution, because PG restart or crash recovery will make waiting transactions visible. 
But we protect from this on HA tool's side.\n\nBest regards, Andrey Borodin.\n\n[0] https://commitfest.postgresql.org/33/2402/\n\n", "msg_date": "Thu, 22 Apr 2021 09:55:37 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: Synchronous commit behavior during network outage" }, { "msg_contents": "Hello Andrey,\n\nI went through the thread for your patch and seems to me as an \nacceptable solution...\n\n > The only case patch does not handle is sudden backend crash - \nPostgres will recover without a restart.\n\nWe also use a HA tool (Patroni). If the whole machine fails, it will \nfind a new master and it should be OK. We use a 4 node setup (2 sync \nreplicas and 1 async from every replica). If there is an issue just with \nsync replica (async operated normally) and the master fails completely \nin this situation, it will be solved by Patroni (the async replica \nbecome another sync), but if it is just the backend process, the master \nwill not failover and changes will be still visible...\n\nIf the sync replica outage is temporal it will be solved itself when the \nnode will establish a replication slot again... If the outage is \"long\", \nPatroni will remove the \"old\" sync replica from the cluster and the \nasync replica reading from the master would be new sync. So yes... In 2 \nnode setup, this can be an issue, but in 4 node setup, this seems to me \nlike a solution.\nThe only situation I can imagine is a situation when the client \nconnections use a different network than the replication network and the \nreplication network would be down completely, but the client network \nwill be up. 
In that case, the master can be an \"isolated island\" and if \nit fails, we can lose the changed data.\nIs this situation also covered in your model: \"transaction effects \nshould not be observable on primary until requirements of \nsynchronous_commit are satisfied.\"\n\nDo you agree with my thoughts?\n\nMaybe would be possible to implement it into PostgreSQL with a note in \ndocumentation, that a multinode (>=3 nodes) cluster is necessary.\n\nRegards\nOndrej\n\nOn 22/04/2021 05:55, Andrey Borodin wrote:\n\n> Hi Ondrej!\n>\n>> 19 апр. 2021 г., в 22:19, Ondřej Žižka <ondrej.zizka@stratox.cz> написал(а):\n>>\n>> Do you think, that would be possible to implement a process that would solve this use case?\n>> Thank you\n>> Ondrej\n>>\n> Feel free to review patch fixing this at [0]. It's classified as \"Server Features\", but I'm sure it's a bug fix.\n>\n> Yandex.Cloud PG runs with this patch for more than half a year. Because we cannot afford loosing data in HA clusters.\n>\n> It's somewhat incomplete solution, because PG restart or crash recovery will make waiting transactions visible. But we protect from this on HA tool's side.\n>\n> Best regards, Andrey Borodin.\n>\n> [0] https://commitfest.postgresql.org/33/2402/\n\n\n", "msg_date": "Mon, 26 Apr 2021 18:01:02 +0100", "msg_from": "=?UTF-8?B?T25kxZllaiDFvWnFvmth?= <ondrej.zizka@stratox.cz>", "msg_from_op": true, "msg_subject": "Re: Synchronous commit behavior during network outage" }, { "msg_contents": "Thanks for reviewing Ondřej!\n\n> 26 апр. 2021 г., в 22:01, Ondřej Žižka <ondrej.zizka@stratox.cz> написал(а):\n> \n> Hello Andrey,\n> \n> I went through the thread for your patch and seems to me as an acceptable solution...\n> \n> > The only case patch does not handle is sudden backend crash - Postgres will recover without a restart.\n> \n> We also use a HA tool (Patroni). If the whole machine fails, it will find a new master and it should be OK. 
We use a 4 node setup (2 sync replicas and 1 async from every replica). If there is an issue just with sync replica (async operated normally) and the master fails completely in this situation, it will be solved by Patroni (the async replica become another sync), but if it is just the backend process, the master will not failover and changes will be still visible...\n> \n> If the sync replica outage is temporal it will be solved itself when the node will establish a replication slot again... If the outage is \"long\", Patroni will remove the \"old\" sync replica from the cluster and the async replica reading from the master would be new sync. So yes... In 2 node setup, this can be an issue, but in 4 node setup, this seems to me like a solution.\n> The only situation I can imagine is a situation when the client connections use a different network than the replication network and the replication network would be down completely, but the client network will be up. In that case, the master can be an \"isolated island\" and if it fails, we can lose the changed data.\nIt is, in fact, very common type of network partition.\n\n> Is this situation also covered in your model: \"transaction effects should not be observable on primary until requirements of synchronous_commit are satisfied.\"\nYes. If synchronous_commit_cancelation = off, no backend crash occurs and HA tool does not start PostgreSQL service when in doubt that other primary may exists.\n\n> Do you agree with my thoughts?\nI could not understand your reasoning about 2 and 4 nodes. Can you please clarify a bit how 4 node setup can help prevent visibility of commited-locall-but-canceled transactions?\n\nI do not think we can classify network partitions as \"temporal\" and \"long\". Due to the distributed nature of the system network partitions are eternal and momentary. Simultaneously. 
And if the node A can access node B and node C, this neither implies B can access C, nor B can access A.\n\n> Maybe would be possible to implement it into PostgreSQL with a note in documentation, that a multinode (>=3 nodes) cluster is necessary.\nPostgreSQL does not provide and fault detection and automatic failover. Documenting anything wrt failover is the responsibility of HA tool.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n\n\n\n", "msg_date": "Thu, 6 May 2021 10:09:30 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: Synchronous commit behavior during network outage" }, { "msg_contents": "On 06/05/2021 06:09, Andrey Borodin wrote:\n> I could not understand your reasoning about 2 and 4 nodes. Can you please clarify a bit how 4 node setup can help prevent visibility of commited-locall-but-canceled transactions?\nHello Andrey,\n\nThe initial request (for us) was to have a geo cluster with 2 locations \nwhere would be possible to have 2 sync replicas even in case of failure \nof one location. This means to have 2 nodes in every location (4 \ntogether). 
If one location fails completely (broken network connection), \nPatroni will choose the working location (5 node etcd in 3 locations to \nensure this).\n\nIn the initial state, there is 1 sync replica in each location and one \nasync replica in each location using as a source the sync replica in its \nlocation.\nLet's have the following initial situation:\n1) Nodes pg11 and pg12 are in one location nodes pg21 and pg22 are in \nanother location.\n2) Nodes pg11 and pg21 are in sync replica\n3) Node pg12 is an async replica from pg11\n4) Node pg22 is an async replica from pg21\n5) Master is pg11.\n\nWhen the commited-locally-but-canceled situation happens and there is a \nproblem only with node pg21 (not with the network between nodes), the \nasync replica pg12 will receive the local commit from pg11 just after \nthe local commit on pg11 even if the cancellation happens. So there will \nbe a situation when the commit is present on both pg11 and pg12. If the \npg11 fails, the transaction already exists on pg12 and this node will be \nselected as a new leader (latest LSN).\n\nThere is a period between the time it is committed and the time it will \nhave been sent to the async replica when we can lose data, but I expect \nthis in milliseconds (maybe less).\n\nIt will not prevent visibility but will ensure, that the data would not \nbe lost and in that case, data can be visible on the leader even if they \nare not present on the sync replica because there is ensured the \ncontinuity of the data persistence in the async replica.\n\nI hope I explained it understandably.\n\nRegards\nOndrej\n\n\n\n", "msg_date": "Thu, 20 May 2021 16:40:36 +0100", "msg_from": "=?UTF-8?B?T25kxZllaiDFvWnFvmth?= <ondrej.zizka@stratox.cz>", "msg_from_op": true, "msg_subject": "Re: Synchronous commit behavior during network outage" }, { "msg_contents": "On Tue, 2021-04-20 at 14:19 -0700, SATYANARAYANA NARLAPURAM wrote:\n> One idea here is to make the backend ignore query\n> cancellation/backend 
termination while waiting for the synchronous\n> commit ACK. This way client never reads the data that was never\n> flushed remotely. The problem with this approach is that your\n> backends get stuck until your commit log record is flushed on the\n> remote side. Also, the client can see the data not flushed remotely\n> if the server crashes and comes back online. You can prevent the\n> latter case by making a SyncRepWaitForLSN before opening up the\n> connections to the non-superusers. I have a working prototype of this\n> logic, if there is enough interest I can post the patch.\n\nI didn't see a patch here yet, so I wrote a simple one for\nconsideration (attached).\n\nThe problem exists for both cancellation and termination requests. The\npatch adds a GUC that makes SyncRepWaitForLSN keep waiting. It does not\nignore the requests; for instance, a termination request will still be\nhonored when it's done waiting for sync rep.\n\nThe idea of this GUC is not to wait forever (obviously), but to allow\nthe administrator (or an automated network agent) to be in control of\nthe logic:\n\nIf the primary is non-responsive, the administrator can decide to fail\nover, knowing that all visible transactions on the primary are durable\non the standby (because any transaction that didn't make it to the\nstandby also didn't release locks yet). 
If the standby is non-\nresponsive, the administrator can intervene with something like:\n\n ALTER SYSTEM SET synchronous_standby_names = '';\n SELECT pg_reload_conf();\n\nwhich will disable sync rep, allowing the primary to complete the query\nand continue on without the standby; but in that case the admin must be\nsure not to fail over until there's a new standby fully caught-up.\n\nThe patch may be somewhat controversial, so I'll wait for feedback\nbefore documenting it properly.\n\nRegards,\n\tJeff Davis", "msg_date": "Mon, 28 Jun 2021 15:56:29 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Synchronous commit behavior during network outage" }, { "msg_contents": "\n\n> 29 июня 2021 г., в 03:56, Jeff Davis <pgsql@j-davis.com> написал(а):\n> \n> The patch may be somewhat controversial, so I'll wait for feedback\n> before documenting it properly.\n\nThe patch seems similar to [0]. But I like your wording :)\nI'd be happy if we go with any version of these idea.\n\nBest regards, Andrey Borodin.\n\n\n[0]https://commitfest.postgresql.org/33/2402/\n\n\n\n", "msg_date": "Tue, 29 Jun 2021 11:48:05 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: Synchronous commit behavior during network outage" }, { "msg_contents": "On Tue, 2021-06-29 at 11:48 +0500, Andrey Borodin wrote:\n> > 29 июня 2021 г., в 03:56, Jeff Davis <pgsql@j-davis.com>\n> > написал(а):\n> > \n> > The patch may be somewhat controversial, so I'll wait for feedback\n> > before documenting it properly.\n> \n> The patch seems similar to [0]. But I like your wording :)\n> I'd be happy if we go with any version of these idea.\n\nThank you, somehow I missed that one, we should combine the CF entries.\n\nMy patch also covers the backend termination case. 
Is there a reason\nyou left that case out?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Tue, 29 Jun 2021 11:35:35 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Synchronous commit behavior during network outage" }, { "msg_contents": "\n\n> 29 июня 2021 г., в 23:35, Jeff Davis <pgsql@j-davis.com> написал(а):\n> \n> On Tue, 2021-06-29 at 11:48 +0500, Andrey Borodin wrote:\n>>> 29 июня 2021 г., в 03:56, Jeff Davis <pgsql@j-davis.com>\n>>> написал(а):\n>>> \n>>> The patch may be somewhat controversial, so I'll wait for feedback\n>>> before documenting it properly.\n>> \n>> The patch seems similar to [0]. But I like your wording :)\n>> I'd be happy if we go with any version of these idea.\n> \n> Thank you, somehow I missed that one, we should combine the CF entries.\n> \n> My patch also covers the backend termination case. Is there a reason\n> you left that case out?\nYes, backend termination is used by HA tool before rewinding the node. Initially I was considering termination as PANIC and got a ton of coredumps during failovers on drills.\n\nThere is one more caveat we need to fix: we should prevent instant recovery from happening. HA tool must know that our process was restarted. \nConsider following scenario:\n1. Node A is primary with sync rep.\n2. A is going through network partitioning, somewhere node B is promoted.\n3. All backends of A are stuck in sync rep, until HA tool discovers A is failed node.\n4. One backend crashes with segfault in some buggy extension or OOM or whatever\n5. Postgres server is doing restartless crash recovery making local-but-not-replicated data visible.\n\nWe should prevent 5 also as we prevent cancels. 
HA tool will discover postmaster failure and will recheck in the coordination system that it can bring Postgres up locally.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Wed, 30 Jun 2021 17:28:28 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: Synchronous commit behavior during network outage" }, { "msg_contents": "On Wed, 2021-06-30 at 17:28 +0500, Andrey Borodin wrote:\n> > My patch also covers the backend termination case. Is there a\n> > reason\n> > you left that case out?\n> \n> Yes, backend termination is used by HA tool before rewinding the\n> node.\n\nCan't you just disable sync rep first (using ALTER SYSTEM SET\nsynchronous_standby_names=''), which will unstick the backend, and then\nterminate it?\n\nIf you don't handle the termination case, then there's still a chance\nfor the transaction to become visible to other clients before it's\nreplicated.\n\n> There is one more caveat we need to fix: we should prevent instant\n> recovery from happening.\n\nThat can already be done with the restart_after_crash GUC.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 01 Jul 2021 22:59:47 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Synchronous commit behavior during network outage" }, { "msg_contents": "\n\n> On 2 Jul 2021, at 10:59, Jeff Davis <pgsql@j-davis.com> wrote:\n> \n> On Wed, 2021-06-30 at 17:28 +0500, Andrey Borodin wrote:\n>>> My patch also covers the backend termination case. Is there a\n>>> reason\n>>> you left that case out?\n>> \n>> Yes, backend termination is used by HA tool before rewinding the\n>> node.\n> \n> Can't you just disable sync rep first (using ALTER SYSTEM SET\n> synchronous_standby_names=''), which will unstick the backend, and then\n> terminate it?\nIf the failover happens due to an unresponsive node we cannot just turn off sync rep. 
We need to have some spare connections for that (number of stuck backends will skyrocket during network partitioning). We need available descriptors and some memory to fork new backend. We will need to re-read config. We need time to try after all.\nAt some failures we may lack some of these.\n\nPartial degradation is already hard task. Without ability to easily terminate running Postgres HA tool will often resort to SIGKILL.\n\n> \n> If you don't handle the termination case, then there's still a chance\n> for the transaction to become visible to other clients before its\n> replicated.\nTermination is admin command, they know what they are doing.\nCancelation is part of user protocol.\n\nBTW can we have two GUCs? So that HA tool developers will decide on their own which guaranties they provide?\n\n> \n>> There is one more caveat we need to fix: we should prevent instant\n>> recovery from happening.\n> \n> That can already be done with the restart_after_crash GUC.\n\nOh, I didn't know it, we will use it. Thanks!\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Fri, 2 Jul 2021 11:39:39 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: Synchronous commit behavior during network outage" }, { "msg_contents": "On Fri, 2021-07-02 at 11:39 +0500, Andrey Borodin wrote:\n> If the failover happens due to unresponsive node we cannot just turn\n> off sync rep. We need to have some spare connections for that (number\n> of stuck backends will skyrocket during network partitioning). We\n> need available descriptors and some memory to fork new backend. We\n> will need to re-read config. We need time to try after all.\n> At some failures we may lack some of these.\n\nI think it's a good point that, when things start to go wrong, they can\ngo very wrong very quickly.\n\nBut until you've disabled sync rep, the primary will essentially be\ndown for writes whether using this new feature or not. 
Even if you can\nterminate some backends to try to free space, the application will just\nmake new connections that will get stuck the same way.\n\nYou can avoid the \"fork backend\" problem by keeping a connection always\nopen from the HA tool, or by editing the conf to disable sync rep and\nissuing SIGHUP instead. Granted, that still takes some memory.\n\n> Partial degradation is already hard task. Without ability to easily\n> terminate running Postgres HA tool will often resort to SIGKILL.\n\nWhen the system is really wedged as you describe (waiting on sync rep,\ntons of connections, and low memory), what information do you expect\nthe HA tool to be able to collect, and what actions do you expect it to\ntake?\n\nPresumably, you'd want it to disable sync rep at some point to get back\nonline. Where does SIGTERM fit into the picture?\n\n> > If you don't handle the termination case, then there's still a\n> > chance\n> > for the transaction to become visible to other clients before its\n> > replicated.\n> \n> Termination is admin command, they know what they are doing.\n> Cancelation is part of user protocol.\n\n From the pg_terminate_backend() docs: \"This is also allowed if the\ncalling role is a member of the role whose backend is being terminated\nor the calling role has been granted pg_signal_backend\", so it's not\nreally an admin command. Even for an admin, it might be hard to\nunderstand why terminating a backend could result in losing a visible\ntransaction.\n\nI'm not really seeing two use cases here for two GUCs. 
Are you sure you\nwant to disable only cancels but allow termination to proceed?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Fri, 02 Jul 2021 13:15:10 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Synchronous commit behavior during network outage" }, { "msg_contents": "\n\n> 3 июля 2021 г., в 01:15, Jeff Davis <pgsql@j-davis.com> написал(а):\n> \n> On Fri, 2021-07-02 at 11:39 +0500, Andrey Borodin wrote:\n>> If the failover happens due to unresponsive node we cannot just turn\n>> off sync rep. We need to have some spare connections for that (number\n>> of stuck backends will skyrocket during network partitioning). We\n>> need available descriptors and some memory to fork new backend. We\n>> will need to re-read config. We need time to try after all.\n>> At some failures we may lack some of these.\n> \n> I think it's a good point that, when things start to go wrong, they can\n> go very wrong very quickly.\n> \n> But until you've disabled sync rep, the primary will essentially be\n> down for writes whether using this new feature or not. Even if you can\n> terminate some backends to try to free space, the application will just\n> make new connections that will get stuck the same way.\nSurely I'm talking about terminating postmaster, not individual backends. But postmaster will need to terminate each running query.\nWe surely need to have a way to stop whole instance without making any single query. And I do not like kill -9 for this purpose.\n\n> \n> You can avoid the \"fork backend\" problem by keeping a connection always\n> open from the HA tool, or by editing the conf to disable sync rep and\n> issuing SIGHUP instead. Granted, that still takes some memory.\n> \n>> Partial degradation is already hard task. 
Without ability to easily\n>> terminate running Postgres HA tool will often resort to SIGKILL.\n> \n> When the system is really wedged as you describe (waiting on sync rep,\n> tons of connections, and low memory), what information do you expect\n> the HA tool to be able to collect, and what actions do you expect it to\n> take?\nHA tool is not going to collect anything. It just calls pg_ctl stop [0] or it's equivalent.\n\n> \n> Presumably, you'd want it to disable sync rep at some point to get back\n> online. Where does SIGTERM fit into the picture?\n\nHA tool is going to terminate running instance, rewind it, switch to new timeline and enroll into cluster again as standby.\n\n> \n>>> If you don't handle the termination case, then there's still a\n>>> chance\n>>> for the transaction to become visible to other clients before its\n>>> replicated.\n>> \n>> Termination is admin command, they know what they are doing.\n>> Cancelation is part of user protocol.\n> \n> From the pg_terminate_backend() docs: \"This is also allowed if the\n> calling role is a member of the role whose backend is being terminated\n> or the calling role has been granted pg_signal_backend\", so it's not\n> really an admin command. Even for an admin, it might be hard to\n> understand why terminating a backend could result in losing a visible\n> transaction.\nOk, I see backend termination is not described as admin command.\nWe cannot prevent user from doing stupid things, they are able to delete their data anyway.\n\n> I'm not really seeing two use cases here for two GUCs. Are you sure you\n> want to disable only cancels but allow termination to proceed?\n\nYes, I'm sure. I had been running production with disabled termination for some weeks. cluster reparation was much slower. For some reason kill-9-ed instances were successfully rewound much less often. 
But maybe I've done something wrong.\n\nIf we can stop whole instance the same way as we did without activating proposed GUC - there is no any problem.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n[0] https://github.com/zalando/patroni/blob/master/patroni/postgresql/postmaster.py#L155\n\n", "msg_date": "Sat, 3 Jul 2021 14:06:24 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: Synchronous commit behavior during network outage" }, { "msg_contents": "On Sat, 2021-07-03 at 14:06 +0500, Andrey Borodin wrote:\n> > But until you've disabled sync rep, the primary will essentially be\n> > down for writes whether using this new feature or not. Even if you\n> > can\n> > terminate some backends to try to free space, the application will\n> > just\n> > make new connections that will get stuck the same way.\n> \n> Surely I'm talking about terminating postmaster, not individual\n> backends. But postmaster will need to terminate each running query.\n> We surely need to have a way to stop whole instance without making\n> any single query. And I do not like kill -9 for this purpose.\n\nkill -6 would suffice.\n\nI see the point that you don't want this to interfere with an\nadministrative shutdown. But it seems like most shutdowns will need to\nescalate to SIGABRT for cases where things are going badly wrong (low\nmemory, etc.) anyway. 
I don't see a better solution here.\n\nI don't fully understand why you'd be concerned about cancellation but\nnot concerned about similar problems with termination, but if you think\ntwo GUCs are important I can do that.\n\nRegards,\n\tJeff Davis\n\n\n\n\n\n", "msg_date": "Sat, 03 Jul 2021 11:44:20 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Synchronous commit behavior during network outage" }, { "msg_contents": "\n\n> 3 июля 2021 г., в 23:44, Jeff Davis <pgsql@j-davis.com> написал(а):\n> \n> On Sat, 2021-07-03 at 14:06 +0500, Andrey Borodin wrote:\n>>> But until you've disabled sync rep, the primary will essentially be\n>>> down for writes whether using this new feature or not. Even if you\n>>> can\n>>> terminate some backends to try to free space, the application will\n>>> just\n>>> make new connections that will get stuck the same way.\n>> \n>> Surely I'm talking about terminating postmaster, not individual\n>> backends. But postmaster will need to terminate each running query.\n>> We surely need to have a way to stop whole instance without making\n>> any single query. And I do not like kill -9 for this purpose.\n> \n> kill -6 would suffice.\nSIGABRT is expected to generate a core dump, isn't it? Node failover is somewhat expected state in HA system.\n\n> \n> I see the point that you don't want this to interfere with an\n> administrative shutdown. But it seems like most shutdowns will need to\n> escalate to SIGABRT for cases where things are going badly wrong (low\n> memory, etc.) anyway. 
I don't see a better solution here.\nIn my experience SIGTERM coped fine so far.\n\n> I don't fully understand why you'd be concerned about cancellation but\n> not concerned about similar problems with termination, but if you think\n> two GUCs are important I can do that.\nI think 2 GUCs is a better solution than 1 GUC disabling both cancelation and termination.\nIt would be great if some other HA tool developers would chime in.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Fri, 9 Jul 2021 23:10:20 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: Synchronous commit behavior during network outage" }, { "msg_contents": "On Fri, 2021-07-09 at 23:10 +0500, Andrey Borodin wrote:\n> In my experience SIGTERM coped fine so far.\n\nOK. I don't think ignoring SIGTERM in the way my patch does it is a\ngreat solution, and it's not getting much support, so I think I'll back\naway from that idea.\n\nI had a separate discussion with Andres, and he made a distinction\nbetween explicit vs. implicit actions. For instance, an explicit\nSIGTERM or SIGINT should not be ignored (or the functions that cause\nthose to happen); but if we are waiting for sync rep then it might be\nOK to ignore a cancel caused by statement_timeout or a termination due\nto a network disconnect.\n\nSeparately, I'm taking a vacation. Since there are two versions of the\npatch floating around, I will withdraw mine.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Mon, 12 Jul 2021 19:22:14 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Synchronous commit behavior during network outage" } ]
[ { "msg_contents": "Hi,\n\nI've noticed that customers not infrequently complain that they start\npostgres and then the system doesn't come up for a while and they have\nno idea what's going on and are (understandably) worried. There are\nprobably a number of reasons why this can happen, but the ones that\nseem to come up most often in my experience are (1) SyncDataDirectory\ntakes a long time, (b) ResetUnloggedRelations takes a long time, and\n(c) there's a lot of WAL to apply so that takes a long time. It's\npossible to distinguish this last case from the other two by looking\nat the output of 'ps', but that's not super-convenient if your normal\nmethod of access to the server is via libpq, and it only works if you\nare monitoring it as it's happening rather than looking at the logs\nafter-the-fact. I am not sure there's any real way to distinguish the\nother two cases without using strace or gdb or similar.\n\nIt seems to me that we could do better. One approach would be to try\nto issue a log message periodically - maybe once per minute, or some\nconfigurable interval, e.g. perhaps add messages something like this:\n\nLOG: still syncing data directory, elapsed time %ld.%03d ms, current path %s\nLOG: data directory sync complete after %ld.%03d ms\nLOG: still resetting unlogged relations, elapsed time %ld.%03d ms,\ncurrent path %s\nLOG: unlogged relations reset after %ld.%03d ms\nLOG: still performing crash recovery, elapsed time %ld.%03d ms,\ncurrent LSN %08X/%08X\n\nWe already have a message when redo is complete, so there's no need\nfor another one. The implementation here doesn't seem too hard either:\nthe startup process would set a timer, when the timer expires the\nsignal handler sets a flag, at a convenient point we notice the flag\nis set and responding by printing a message and clearing the flag.\n\nAnother possible approach would be to accept connections for\nmonitoring purposes even during crash recovery. 
We can't allow access\nto any database at that point, since the system might not be\nconsistent, but we could allow something like a replication connection\n(the non-database-associated variant). Maybe it would be precisely a\nreplication connection and we'd just refuse all but a subset of\ncommands, or maybe it would be some other kinds of thing. But either\nway you'd be able to issue a command in some mini-language saying \"so,\ntell me how startup is going\" and it would reply with a result set of\nsome kind.\n\nIf I had to pick one of these two ideas, I'd pick the one the\nlog-based solution, since it seems easier to access and simplifies\nretrospective analysis, but I suspect SQL access would be quite useful\nfor some users too, especially in cloud environments where \"just log\ninto the machine and have a look\" is not an option.\n\nThoughts?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 19 Apr 2021 13:55:13 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "when the startup process doesn't" }, { "msg_contents": "On 2021-Apr-19, Robert Haas wrote:\n\n> Another possible approach would be to accept connections for\n> monitoring purposes even during crash recovery. We can't allow access\n> to any database at that point, since the system might not be\n> consistent, but we could allow something like a replication connection\n> (the non-database-associated variant).\n\nHmm. We already have pg_isready, which is pretty simplistic -- it tries\nto connect to the server and derive a status in a very simplistic way.\nCan we perhaps improve on that? I think your idea of using the\nnon-database-connected replication mode would let the server return a\ntuple with some status information with a new command. 
And then\npg_isready could interpret that, or just print it.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\nSubversion to GIT: the shortest path to happiness I've ever heard of\n (Alexey Klyukin)\n\n\n", "msg_date": "Mon, 19 Apr 2021 19:16:37 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't" }, { "msg_contents": "On Mon, Apr 19, 2021 at 01:55:13PM -0400, Robert Haas wrote:\n> I've noticed that customers not infrequently complain that they start\n> postgres and then the system doesn't come up for a while and they have\n> no idea what's going on and are (understandably) worried. There are\n> probably a number of reasons why this can happen, but the ones that\n> seem to come up most often in my experience are (1) SyncDataDirectory\n> takes a long time, (b) ResetUnloggedRelations takes a long time, and\n> (c) there's a lot of WAL to apply so that takes a long time. It's\n> possible to distinguish this last case from the other two by looking\n> at the output of 'ps', but that's not super-convenient if your normal\n> method of access to the server is via libpq, and it only works if you\n> are monitoring it as it's happening rather than looking at the logs\n> after-the-fact. I am not sure there's any real way to distinguish the\n> other two cases without using strace or gdb or similar.\n> \n> It seems to me that we could do better. One approach would be to try\n> to issue a log message periodically - maybe once per minute, or some\n> configurable interval, e.g. 
perhaps add messages something like this:\n\nYes, this certainly needs improvement.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 19 Apr 2021 20:30:19 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't" }, { "msg_contents": "On Tue, Apr 20, 2021 at 5:55 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> If I had to pick one of these two ideas, I'd pick the one the\n> log-based solution, since it seems easier to access and simplifies\n> retrospective analysis, but I suspect SQL access would be quite useful\n> for some users too, especially in cloud environments where \"just log\n> into the machine and have a look\" is not an option.\n\n+1 for both ideas. I've heard multiple requests for something like\nthat. A couple of users with update_process_title=off told me they\nregretted that choice when they found themselves running a long crash\nrecovery with the only indicator of progress disabled.\n\n\n", "msg_date": "Tue, 20 Apr 2021 12:38:11 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Tue, Apr 20, 2021 at 5:55 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> If I had to pick one of these two ideas, I'd pick the one the\n>> log-based solution, since it seems easier to access and simplifies\n>> retrospective analysis, but I suspect SQL access would be quite useful\n>> for some users too, especially in cloud environments where \"just log\n>> into the machine and have a look\" is not an option.\n\n> +1 for both ideas. I've heard multiple requests for something like\n> that. 
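The timer-and-flag scheme described upthread is simple enough to sketch. Roughly this shape, in Python only for brevity (the real thing would of course be C in the startup process, and the interval, message text, and names here are all made up):

```python
import signal
import time

progress_due = False  # in C this would be a volatile sig_atomic_t set by the handler

def on_timer(signum, frame):
    # The handler does nothing but set the flag; the real work
    # (formatting and emitting the log line) happens in the main loop.
    global progress_due
    progress_due = True

def replay(records, interval=0.05):
    """Pretend to apply 'records' WAL records, emitting a progress line
    whenever the repeating timer has fired since the last check."""
    global progress_due
    lines = []
    old = signal.signal(signal.SIGALRM, on_timer)
    signal.setitimer(signal.ITIMER_REAL, interval, interval)  # repeating timer
    start = time.monotonic()
    try:
        for lsn in range(records):
            time.sleep(0.01)          # stand-in for redoing one record
            if progress_due:          # checked at a "convenient point"
                ms = (time.monotonic() - start) * 1000.0
                lines.append("LOG: still performing crash recovery, "
                             "elapsed time %.3f ms, current LSN %d" % (ms, lsn))
                progress_due = False
    finally:
        signal.setitimer(signal.ITIMER_REAL, 0)  # cancel the timer
        signal.signal(signal.SIGALRM, old)
    return lines
```

In C it'd be a repeating setitimer() plus a flag, with the check done at some safe point inside the redo loop rather than once per record.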
A couple of users with update_process_title=off told me they\n> regretted that choice when they found themselves running a long crash\n> recovery with the only indicator of progress disabled.\n\nHmm ... +1 for progress messages in the log, but I'm suspicious about\nthe complexity-and-fragility-versus-value tradeoff for the other thing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 19 Apr 2021 20:44:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't" }, { "msg_contents": "On Mon, Apr 19, 2021 at 7:55 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> Hi,\n>\n> I've noticed that customers not infrequently complain that they start\n> postgres and then the system doesn't come up for a while and they have\n> no idea what's going on and are (understandably) worried. There are\n> probably a number of reasons why this can happen, but the ones that\n> seem to come up most often in my experience are (1) SyncDataDirectory\n> takes a long time, (b) ResetUnloggedRelations takes a long time, and\n> (c) there's a lot of WAL to apply so that takes a long time. It's\n> possible to distinguish this last case from the other two by looking\n> at the output of 'ps', but that's not super-convenient if your normal\n> method of access to the server is via libpq, and it only works if you\n> are monitoring it as it's happening rather than looking at the logs\n> after-the-fact. I am not sure there's any real way to distinguish the\n> other two cases without using strace or gdb or similar.\n>\n> It seems to me that we could do better. One approach would be to try\n> to issue a log message periodically - maybe once per minute, or some\n> configurable interval, e.g. 
perhaps add messages something like this:\n>\n> LOG: still syncing data directory, elapsed time %ld.%03d ms, current path %s\n> LOG: data directory sync complete after %ld.%03d ms\n> LOG: still resetting unlogged relations, elapsed time %ld.%03d ms,\n> current path %s\n> LOG: unlogged relations reset after %ld.%03d ms\n> LOG: still performing crash recovery, elapsed time %ld.%03d ms,\n> current LSN %08X/%08X\n>\n> We already have a message when redo is complete, so there's no need\n> for another one. The implementation here doesn't seem too hard either:\n> the startup process would set a timer, when the timer expires the\n> signal handler sets a flag, at a convenient point we notice the flag\n> is set and responding by printing a message and clearing the flag.\n>\n> Another possible approach would be to accept connections for\n> monitoring purposes even during crash recovery. We can't allow access\n> to any database at that point, since the system might not be\n> consistent, but we could allow something like a replication connection\n> (the non-database-associated variant). Maybe it would be precisely a\n> replication connection and we'd just refuse all but a subset of\n> commands, or maybe it would be some other kinds of thing. But either\n> way you'd be able to issue a command in some mini-language saying \"so,\n> tell me how startup is going\" and it would reply with a result set of\n> some kind.\n>\n> If I had to pick one of these two ideas, I'd pick the one the\n> log-based solution, since it seems easier to access and simplifies\n> retrospective analysis, but I suspect SQL access would be quite useful\n> for some users too, especially in cloud environments where \"just log\n> into the machine and have a look\" is not an option.\n>\n> Thoughts?\n\n(Ugh. Did reply instead of reply-all. Surely I should know that by\nnow... 
Here's a re-send!)\n\n+1 for the log based one.\n\nIn general I'm usually against the log based one, but something over\nthe replication protocol is really not going to help a lot of people\nwho are in this situation. They may not even have permissions to log\nin, and any kind of monitoring system would fail to work as well. And\ncan we even log users in at this point? We can't get the list of\nroles... If we could, I would say it's probably better to allow the\nlogin in a regular connection, but then immediately throw an error and\ngive this error a more detailed message if the user has monitoring\npermissions.\n\nBut against either of those, the log based method is certainly a lot\neasier to build :)\n\nAnd FWIW, I believe most -- probably all -- cloud environments do give\nan interface to view the log at least, so the log based solution would\nwork there as well. Maybe not as convenient, but it would work.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Tue, 20 Apr 2021 14:22:52 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't" }, { "msg_contents": "On Mon, Apr 19, 2021 at 8:44 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Hmm ... +1 for progress messages in the log, but I'm suspicious about\n> the complexity-and-fragility-versus-value tradeoff for the other thing.\n\nAll right, so it's nice to hear that nobody so far is opposed to the\nlog-based solution, and I think it's sensible to think about building\nthat one first and doing anything else later.\n\nBut, if we did want to invent something to allow monitoring via libpq\neven at this early stage, how would we make it work? Magnus pointed\nout that we can hardly read pg_authid during crash recovery, which\nmeans that accepting logins in the usual sense at that stage is not\nfeasible. But, what if we picked a fixed, hard-coded role name for\nthis? 
I would suggest pg_monitor, but that's already taken for\nsomething else, so maybe pg_console or some better thing someone else\ncan suggest. Without a pg_authid row, you couldn't use password, md5,\nor scram authentication, unless we provided some other place to store\nthe verifier, like a flatfile. I'm not sure we want to go there, but\nthat still leaves a lot of workable authentication methods.\n\nI think Álvaro is right to see this kind of work as an extension of\npg_isready, but the problem with pg_isready is that we don't want to\nexpose a lot of information to the whole Internet, or however much of\nit can reach the postgres port. But with this approach, you can lock\ndown access via pg_hba.conf, which means that it's OK to expose\ninformation that we don't want to make available to everyone. I think\nwe're still limited to exposing what can be observed from shared\nmemory here, because the whole idea is to have something that can be\nused even before consistency is reached, so we shouldn't really be\ndoing anything that would look at the contents of data files. But that\nstill leaves a bunch of things that we could show here, the progress\nof the startup process being one of them.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 20 Apr 2021 08:43:27 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: when the startup process doesn't" }, { "msg_contents": "On Tue, Apr 20, 2021 at 2:43 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Apr 19, 2021 at 8:44 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Hmm ... 
+1 for progress messages in the log, but I'm suspicious about\n> > the complexity-and-fragility-versus-value tradeoff for the other thing.\n>\n> All right, so it's nice to hear that nobody so far is opposed to the\n> log-based solution, and I think it's sensible to think about building\n> that one first and doing anything else later.\n>\n> But, if we did want to invent something to allow monitoring via libpq\n> even at this early stage, how would we make it work? Magnus pointed\n> out that we can hardly read pg_authid during crash recovery, which\n> means that accepting logins in the usual sense at that stage is not\n> feasible. But, what if we picked a fixed, hard-coded role name for\n> this? I would suggest pg_monitor, but that's already taken for\n> something else, so maybe pg_console or some better thing someone else\n> can suggest. Without a pg_authid row, you couldn't use password, md5,\n> or scram authentication, unless we provided some other place to store\n> the verifier, like a flatfile. I'm not sure we want to go there, but\n> that still leaves a lot of workable authentication methods.\n\nAnother option would be to keep this check entirely outside the scope\nof normal roles, and just listen on a port (or unix socket) during\nstartup which basically just replies with the current status if you\nconnect to it. On Unix this could also make use of peer authentication\nrequiring you to be the same user as postgres for example.\n\n\n> I think Álvaro is right to see this kind of work as an extension of\n> pg_isready, but the problem with pg_isready is that we don't want to\n> expose a lot of information to the whole Internet, or however much of\n> it can reach the postgres port. But with this approach, you can lock\n> down access via pg_hba.conf, which means that it's OK to expose\n> information that we don't want to make available to everyone. 
I think\n> we're still limited to exposing what can be observed from shared\n> memory here, because the whole idea is to have something that can be\n> used even before consistency is reached, so we shouldn't really be\n> doing anything that would look at the contents of data files. But that\n> still leaves a bunch of things that we could show here, the progress\n> of the startup process being one of them.\n\nYeah, I think we should definitely limit this to local access, one way\nor another. Realistically using pg_hba is going to require catalog\naccess, isn't it? And we can't just go ignore those rows in pg_hba\nthat for example references role membership (as well as all those auth\nmethods you can't use).\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Tue, 20 Apr 2021 15:04:28 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't" }, { "msg_contents": "On Tue, 20 Apr 2021 15:04:28 +0200\nMagnus Hagander <magnus@hagander.net> wrote:\n[...]\n> Yeah, I think we should definitely limit this to local access, one way\n> or another. Realistically using pg_hba is going to require catalog\n> access, isn't it? And we can't just go ignore those rows in pg_hba\n> that for example references role membership (as well as all those auth\n> methods you can't use).\n\nTwo other options:\n\n1. if this is limited to local access only, outside of the log entries, the\nstatus of the startup could be updated in the controldata file as well. This\nwould allow watching it without tail-grep'ing logs using e.g. pg_controldata.\n\n2. maybe the startup process could ignore update_process_title? 
As far\nas I understand the doc correctly, this GUC is mostly useful for backends on\nWindows.\n\nRegards,\n\n\n", "msg_date": "Tue, 20 Apr 2021 17:17:20 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't" }, { "msg_contents": "On Tue, Apr 20, 2021 at 5:17 PM Jehan-Guillaume de Rorthais\n<jgdr@dalibo.com> wrote:\n>\n> On Tue, 20 Apr 2021 15:04:28 +0200\n> Magnus Hagander <magnus@hagander.net> wrote:\n> [...]\n> > Yeah, I think we should definitely limit this to local access, one way\n> > or another. Realistically using pg_hba is going to require catalog\n> > access, isn't it? And we can't just go ignore those rows in pg_hba\n> > that for example references role membership (as well as all those auth\n> > methods you can't use).\n>\n> Two another options:\n>\n> 1. if this is limited to local access only, outside of the log entries, the\n> status of the startup could be updated in the controldata file as well. This\n> would allows to watch it without tail-grep'ing logs using eg. pg_controldata.\n\nI think doing so in controldata would definitely make things\ncomplicated for no real reason. Plus controldata has a fixed size (and\nhas to have), whereas something like this would probably want more\nvariation than that makes easy.\n\nThere could be a \"startup.status\" file I guess which would basically\ncontain the last line of what would otherwise be in the log. But if it\nremains a textfile, I'm not sure what the gain is -- you'll just have\nto have the dba look in more places than one to find it? It's not like\nthere's likely to be much other data written to the log during these\ntimes?\n\n\n> 2. maybe the startup process could ignore update_process_title? As far\n> as I understand the doc correctly, this GUC is mostly useful for backends on\n> Windows.\n\nYou mention Windows -- that would be one excellent reason not to go\nfor this particular method. 
Viewing the process title is much harder\non Windows, as there is actually no such thing and we fake it.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Tue, 20 Apr 2021 19:32:33 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't" }, { "msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> On Tue, Apr 20, 2021 at 5:17 PM Jehan-Guillaume de Rorthais\n> <jgdr@dalibo.com> wrote:\n>> Two another options:\n>> 1. if this is limited to local access only, outside of the log entries, the\n>> status of the startup could be updated in the controldata file as well. This\n>> would allows to watch it without tail-grep'ing logs using eg. pg_controldata.\n\n> I think doing so in controldata would definitely make things\n> complicated for no real reason. Plus controldata has a fixed size (and\n> has to have), whereas something like this would probably want more\n> variation than that makes easy.\n\nAlso, given that pg_control is as critical a bit of data as we have,\nwe really don't want to be writing it more often than we absolutely\nhave to.\n\n> There could be a \"startup.status\" file I guess which would basically\n> contain the last line of what would otherwise be in the log. But if it\n> remains a textfile, I'm not sure what the gain is -- you'll just have\n> to have the dba look in more places than one to find it? It's not like\n> there's likely to be much other data written to the log during these\n> times?\n\nYeah, once you are talking about dumping stuff in a file, it's not\nclear how that's better than progress-messages-in-the-log. 
People\nalready have a lot of tooling for looking at the postmaster log.\n\nI think the point of Robert's other proposal is to allow remote\nchecks of the restart's progress, so local files aren't much of\na substitute anyway.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 20 Apr 2021 14:23:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Magnus Hagander <magnus@hagander.net> writes:\n> > On Tue, Apr 20, 2021 at 5:17 PM Jehan-Guillaume de Rorthais\n> > <jgdr@dalibo.com> wrote:\n> >> Two another options:\n> >> 1. if this is limited to local access only, outside of the log entries, the\n> >> status of the startup could be updated in the controldata file as well. This\n> >> would allows to watch it without tail-grep'ing logs using eg. pg_controldata.\n> \n> > I think doing so in controldata would definitely make things\n> > complicated for no real reason. Plus controldata has a fixed size (and\n> > has to have), whereas something like this would probably want more\n> > variation than that makes easy.\n> \n> Also, given that pg_control is as critical a bit of data as we have,\n> we really don't want to be writing it more often than we absolutely\n> have to.\n\nYeah, don't think pg_control fiddling is what we want. I do agree with\nimproving the logging situation around here, certainly.\n\n> > There could be a \"startup.status\" file I guess which would basically\n> > contain the last line of what would otherwise be in the log. But if it\n> > remains a textfile, I'm not sure what the gain is -- you'll just have\n> > to have the dba look in more places than one to find it? It's not like\n> > there's likely to be much other data written to the log during these\n> > times?\n> \n> Yeah, once you are talking about dumping stuff in a file, it's not\n> clear how that's better than progress-messages-in-the-log. 
People\n> already have a lot of tooling for looking at the postmaster log.\n\nAgreed.\n\n> I think the point of Robert's other proposal is to allow remote\n> checks of the restart's progress, so local files aren't much of\n> a substitute anyway.\n\nYeah, being able to pick up on this remotely seems like it'd be quite\nnice. I'm not really thrilled with the idea, but the best I've got\noffhand for this would be a new role that's \"pg_recovery_login\" where an\nadmin can GRANT that role to the roles they'd like to be able to use to\nlogin during the recovery process and then, for those roles, we write\nout flat files to allow authentication without access to pg_authid,\nwhenever their password or such changes. It's certainly a bit grotty\nbut I do think it'd work. I definitely don't want to go back to having\nall of pg_authid written as a flat file and I'd rather that existing\ntools and libraries work with this (meaning using the same port and\nspeaking the PG protocol and such) rather than inventing some new thing\nthat listens on some other port, etc.\n\nOn the fence about tying this to 'pg_monitor' instead of using a new\npredefined role. Either way, I would definitely prefer to see the admin\nhave to create a role and then GRANT the predefined role to that role.\nI really dislike the idea of having predefined roles that can be used to\ndirectly log into the database.\n\nThanks,\n\nStephen", "msg_date": "Tue, 20 Apr 2021 14:51:50 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> Yeah, being able to pick up on this remotely seems like it'd be quite\n> nice. 
I'm not really thrilled with the idea, but the best I've got\n> offhand for this would be a new role that's \"pg_recovery_login\" where an\n> admin can GRANT that role to the roles they'd like to be able to use to\n> login during the recovery process and then, for those roles, we write\n> out flat files to allow authentication without access to pg_authid,\n\nWe got rid of those flat files for good and sufficient reasons. I really\nreally don't want to go back to having such.\n\nI wonder though whether we really need authentication here. pg_ping\nalready exposes whether the database is up, to anyone who can reach the\npostmaster port at all. Would it be so horrible if the \"can't accept\nconnections\" error message included a detail about \"recovery is X%\ndone\"?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 20 Apr 2021 14:56:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > Yeah, being able to pick up on this remotely seems like it'd be quite\n> > nice. I'm not really thrilled with the idea, but the best I've got\n> > offhand for this would be a new role that's \"pg_recovery_login\" where an\n> > admin can GRANT that role to the roles they'd like to be able to use to\n> > login during the recovery process and then, for those roles, we write\n> > out flat files to allow authentication without access to pg_authid,\n> \n> We got rid of those flat files for good and sufficient reasons. I really\n> really don't want to go back to having such.\n\nYeah, certainly is part of the reason that I didn't really like that\nidea either.\n\n> I wonder though whether we really need authentication here. pg_ping\n> already exposes whether the database is up, to anyone who can reach the\n> postmaster port at all. 
Would it be so horrible if the \"can't accept\n> connections\" error message included a detail about \"recovery is X%\n> done\"?\n\nUltimately it seems like it would depend on exactly what we are thinking\nof returning there. A simple percentage of recovery which has been\ncompleted doesn't seem like it'd really be revealing too much\ninformation though.\n\nThanks,\n\nStephen", "msg_date": "Tue, 20 Apr 2021 15:04:59 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't" }, { "msg_contents": "Hi,\n\nOn 2021-04-19 13:55:13 -0400, Robert Haas wrote:\n> Another possible approach would be to accept connections for\n> monitoring purposes even during crash recovery. We can't allow access\n> to any database at that point, since the system might not be\n> consistent, but we could allow something like a replication connection\n> (the non-database-associated variant). Maybe it would be precisely a\n> replication connection and we'd just refuse all but a subset of\n> commands, or maybe it would be some other kinds of thing. But either\n> way you'd be able to issue a command in some mini-language saying \"so,\n> tell me how startup is going\" and it would reply with a result set of\n> some kind.\n\nThe hard part about this seems to be how to perform authentication -\nobviously we can't do catalog lookups for users at that time.\n\nIf that weren't the issue, we could easily do much better than now, by\njust providing an errdetail() with recovery progress information. But we\npresumably don't want to spray such information to unauthenticated\nconnection attempts.\n\n\nI've vaguely wondered before whether it'd be worth having something like\nan \"admin\" socket somewhere in the data directory. Which explicitly\nwouldn't require authentication, have the cluster owner as the user,\netc. 
That'd not just be useful for monitoring during recovery, but also\nmake some interactions with the server easier for admin tools I think.\n\n\n\n> If I had to pick one of these two ideas, I'd pick the one the\n> log-based solution, since it seems easier to access and simplifies\n> retrospective analysis, but I suspect SQL access would be quite useful\n> for some users too, especially in cloud environments where \"just log\n> into the machine and have a look\" is not an option.\n\nHowever, leaving aside the implementation effort, the crazy idea\nabove would not easily address the issue of only being accessible with\nlocal access...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 20 Apr 2021 13:28:27 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't" }, { "msg_contents": "Hi,\n\nOn 2021-04-20 14:56:58 -0400, Tom Lane wrote:\n> I wonder though whether we really need authentication here. pg_ping\n> already exposes whether the database is up, to anyone who can reach the\n> postmaster port at all. Would it be so horrible if the \"can't accept\n> connections\" error message included a detail about \"recovery is X%\n> done\"?\n\nUnfortunately I think something like a percentage is hard to calculate\nright now. Even just looking at crash recovery (vs replication or\nPITR), we don't currently know where the WAL ends without reading all\nthe WAL. The easiest thing to return would be something in LSNs or\nbytes and I suspect that we don't want to expose either unauthenticated?\n\nI wonder if we ought to occasionally update something like\nControlFileData->minRecoveryPoint on primaries, similar to what we do on\nstandbys? Then we could actually calculate a percentage, and it'd have\nthe added advantage of allowing to detect more cases where the end of\nthe WAL was lost. 
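(The percentage arithmetic itself is trivial once both endpoints are known; a purely illustrative helper, not PostgreSQL code, using the usual %X/%X LSN notation:)

```python
def parse_lsn(lsn):
    """Turn an LSN printed in the usual %X/%X form (high and low 32 bits of
    a 64-bit WAL position) into a plain integer byte position."""
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)

def recovery_percent(start_lsn, current_lsn, end_lsn):
    """Percent of the WAL between start and the (estimated) end LSN
    that has been replayed so far."""
    s, c, e = parse_lsn(start_lsn), parse_lsn(current_lsn), parse_lsn(end_lsn)
    if e <= s:
        return 100.0  # nothing (or a bogus range) to replay
    return 100.0 * (c - s) / (e - s)
```

The hard part is getting a trustworthy end LSN, not the division.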
Obviously we'd have to throttle it somehow, to avoid\nadding a lot of fsyncs, but that seems doable?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 20 Apr 2021 13:36:42 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't" }, { "msg_contents": "On 2021-Apr-20, Andres Freund wrote:\n\n> On 2021-04-19 13:55:13 -0400, Robert Haas wrote:\n> > Another possible approach would be to accept connections for\n> > monitoring purposes even during crash recovery. We can't allow access\n> > to any database at that point, since the system might not be\n> > consistent, but we could allow something like a replication connection\n> > (the non-database-associated variant). Maybe it would be precisely a\n> > replication connection and we'd just refuse all but a subset of\n> > commands, or maybe it would be some other kinds of thing. But either\n> > way you'd be able to issue a command in some mini-language saying \"so,\n> > tell me how startup is going\" and it would reply with a result set of\n> > some kind.\n> \n> The hard part about this seems to be how to perform authentication -\n> obviously we can't do catalog lookups for users at that time.\n\nMaybe a way to do this would involve some sort of monitoring cookie\nthat's obtained ahead of time (maybe at initdb time?) and is supplied to\nthe frontend by some OOB means. Then frontend can present that during\nstartup to the server, which ascertains its legitimacy without having to\naccess catalogs. Perhaps it even requires a specific pg_hba.conf rule.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"La verdad no siempre es bonita, pero el hambre de ella sí\"\n\n\n", "msg_date": "Tue, 20 Apr 2021 16:54:57 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't" }, { "msg_contents": "+1 for both log messages and allowing connections. 
I believe these two\ncomplement each other.\n\nIn the cloud world, we oftentimes want to monitor the progress of the\nrecovery without connecting to the server as the operators don't\nnecessarily have the required permissions to connect and query. Secondly,\nhaving this information in the log helps going back in time and understanding\nwhere Postgres spent time during recovery.\n\nThe ability to query the server provides real time information and comes in\nhandy.\n\nThanks,\nSatya\n\n\n\nOn Mon, Apr 19, 2021 at 10:55 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> Hi,\n>\n> I've noticed that customers not infrequently complain that they start\n> postgres and then the system doesn't come up for a while and they have\n> no idea what's going on and are (understandably) worried. There are\n> probably a number of reasons why this can happen, but the ones that\n> seem to come up most often in my experience are (1) SyncDataDirectory\n> takes a long time, (b) ResetUnloggedRelations takes a long time, and\n> (c) there's a lot of WAL to apply so that takes a long time. It's\n> possible to distinguish this last case from the other two by looking\n> at the output of 'ps', but that's not super-convenient if your normal\n> method of access to the server is via libpq, and it only works if you\n> are monitoring it as it's happening rather than looking at the logs\n> after-the-fact. I am not sure there's any real way to distinguish the\n> other two cases without using strace or gdb or similar.\n>\n> It seems to me that we could do better. One approach would be to try\n> to issue a log message periodically - maybe once per minute, or some\n> configurable interval, e.g. 
perhaps add messages something like this:\n>\n> LOG: still syncing data directory, elapsed time %ld.%03d ms, current path\n> %s\n> LOG: data directory sync complete after %ld.%03d ms\n> LOG: still resetting unlogged relations, elapsed time %ld.%03d ms,\n> current path %s\n> LOG: unlogged relations reset after %ld.%03d ms\n> LOG: still performing crash recovery, elapsed time %ld.%03d ms,\n> current LSN %08X/%08X\n>\n> We already have a message when redo is complete, so there's no need\n> for another one. The implementation here doesn't seem too hard either:\n> the startup process would set a timer, when the timer expires the\n> signal handler sets a flag, at a convenient point we notice the flag\n> is set and responding by printing a message and clearing the flag.\n>\n> Another possible approach would be to accept connections for\n> monitoring purposes even during crash recovery. We can't allow access\n> to any database at that point, since the system might not be\n> consistent, but we could allow something like a replication connection\n> (the non-database-associated variant). Maybe it would be precisely a\n> replication connection and we'd just refuse all but a subset of\n> commands, or maybe it would be some other kinds of thing. But either\n> way you'd be able to issue a command in some mini-language saying \"so,\n> tell me how startup is going\" and it would reply with a result set of\n> some kind.\n>\n> If I had to pick one of these two ideas, I'd pick the one the\n> log-based solution, since it seems easier to access and simplifies\n> retrospective analysis, but I suspect SQL access would be quite useful\n> for some users too, especially in cloud environments where \"just log\n> into the machine and have a look\" is not an option.\n>\n> Thoughts?\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>\n>\n>\n\n+1 for both log messages and allowing connections. I believe these two complement each other. 
In the cloud world, we oftentimes want to monitor the progress of the recovery without connecting to the server as the operators don't necessarily have the required permissions to connect and query. Secondly, having this information in the log helps going back in time and understand where Postgres spent time during recovery.The ability to query the server provides real time information  and come handy.Thanks,SatyaOn Mon, Apr 19, 2021 at 10:55 AM Robert Haas <robertmhaas@gmail.com> wrote:Hi,\n\nI've noticed that customers not infrequently complain that they start\npostgres and then the system doesn't come up for a while and they have\nno idea what's going on and are (understandably) worried. There are\nprobably a number of reasons why this can happen, but the ones that\nseem to come up most often in my experience are (1) SyncDataDirectory\ntakes a long time, (b) ResetUnloggedRelations takes a long time, and\n(c) there's a lot of WAL to apply so that takes a long time. It's\npossible to distinguish this last case from the other two by looking\nat the output of 'ps', but that's not super-convenient if your normal\nmethod of access to the server is via libpq, and it only works if you\nare monitoring it as it's happening rather than looking at the logs\nafter-the-fact. I am not sure there's any real way to distinguish the\nother two cases without using strace or gdb or similar.\n\nIt seems to me that we could do better. One approach would be to try\nto issue a log message periodically - maybe once per minute, or some\nconfigurable interval, e.g. 
perhaps add messages something like this:\n\nLOG:  still syncing data directory, elapsed time %ld.%03d ms, current path %s\nLOG:  data directory sync complete after %ld.%03d ms\nLOG:  still resetting unlogged relations, elapsed time %ld.%03d ms,\ncurrent path %s\nLOG:  unlogged relations reset after %ld.%03d ms\nLOG:  still performing crash recovery, elapsed time %ld.%03d ms,\ncurrent LSN %08X/%08X\n\nWe already have a message when redo is complete, so there's no need\nfor another one. The implementation here doesn't seem too hard either:\nthe startup process would set a timer, when the timer expires the\nsignal handler sets a flag, at a convenient point we notice the flag\nis set and respond by printing a message and clearing the flag.\n\nAnother possible approach would be to accept connections for\nmonitoring purposes even during crash recovery. We can't allow access\nto any database at that point, since the system might not be\nconsistent, but we could allow something like a replication connection\n(the non-database-associated variant). Maybe it would be precisely a\nreplication connection and we'd just refuse all but a subset of\ncommands, or maybe it would be some other kind of thing. 
But either\nway you'd be able to issue a command in some mini-language saying \"so,\ntell me how startup is going\" and it would reply with a result set of\nsome kind.\n\nIf I had to pick one of these two ideas, I'd pick the\nlog-based solution, since it seems easier to access and simplifies\nretrospective analysis, but I suspect SQL access would be quite useful\nfor some users too, especially in cloud environments where \"just log\ninto the machine and have a look\" is not an option.\n\nThoughts?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 20 Apr 2021 14:11:16 -0700", "msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't" }, { "msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2021-04-20 14:56:58 -0400, Tom Lane wrote:\n> > I wonder though whether we really need authentication here. pg_ping\n> > already exposes whether the database is up, to anyone who can reach the\n> > postmaster port at all. Would it be so horrible if the \"can't accept\n> > connections\" error message included a detail about \"recovery is X%\n> > done\"?\n> \n> Unfortunately I think something like a percentage is hard to calculate\n> right now. Even just looking at crash recovery (vs replication or\n> PITR), we don't currently know where the WAL ends without reading all\n> the WAL. The easiest thing to return would be something in LSNs or\n> bytes and I suspect that we don't want to expose either unauthenticated?\n\nWhile it obviously wouldn't be exactly accurate, I wonder if we couldn't\njust look at the WAL files we have to replay and then guess that we'll go\nthrough about half of them before we reach the end..? I mean, it wouldn't\nexactly be the first time that a percentage progress report wasn't\ncompletely accurate. 
:)\n\n> I wonder if we ought to occasionally update something like\n> ControlFileData->minRecoveryPoint on primaries, similar to what we do on\n> standbys? Then we could actually calculate a percentage, and it'd have\n> the added advantage of allowing to detect more cases where the end of\n> the WAL was lost. Obviously we'd have to throttle it somehow, to avoid\n> adding a lot of fsyncs, but that seems doable?\n\nThis seems to go against Tom's concerns wrt rewriting pg_control.\nPerhaps we could work through a solution to that, which would be nice,\nbut I'm not sure that we need the percentage to be super accurate\nanyway, though, ideally, we'd work it out so that it's always increasing\nand doesn't look \"stuck\" as long as we're actually moving forward\nthrough the WAL.\n\nMaybe a heuristic of 'look at the end of the WAL files, assume we'll go\nthrough 50% of it, but only consider that to be 90%, with the last 10%\ngoing from half-way through the WAL to the actual end of the WAL\navailable.'\n\nYes, such heuristics are terrible, but they're also relatively simple\nand wouldn't require tracking anything additional and would, maybe,\navoid the concern about needing to authenticate the user..\n\nThanks,\n\nStephen", "msg_date": "Wed, 21 Apr 2021 14:36:24 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Andres Freund (andres@anarazel.de) wrote:\n>> On 2021-04-20 14:56:58 -0400, Tom Lane wrote:\n>>> I wonder though whether we really need authentication here. pg_ping\n>>> already exposes whether the database is up, to anyone who can reach the\n>>> postmaster port at all. 
Would it be so horrible if the \"can't accept\n>>> connections\" error message included a detail about \"recovery is X%\n>>> done\"?\n\n>> Unfortunately I think something like a percentage is hard to calculate\n>> right now.\n\n> While it obviously wouldn't be exactly accurate, I wonder if we couldn't\n> just look at the WAL files we have to reply and then guess that we'll go\n> through about half of them before we reach the end..? I mean, wouldn't\n> exactly be the first time that a percentage progress report wasn't\n> completely accurate. :)\n\nOr we could skip all the guessing and just print something like what\nthe startup process exposes in ps status, ie \"currently processing\nWAL file so-and-so\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 21 Apr 2021 14:43:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't" }, { "msg_contents": "Hi,\n\nOn 2021-04-21 14:36:24 -0400, Stephen Frost wrote:\n> * Andres Freund (andres@anarazel.de) wrote:\n> > Unfortunately I think something like a percentage is hard to calculate\n> > right now. Even just looking at crash recovery (vs replication or\n> > PITR), we don't currently know where the WAL ends without reading all\n> > the WAL. The easiest thing to return would be something in LSNs or\n> > bytes and I suspect that we don't want to expose either unauthenticated?\n> \n> While it obviously wouldn't be exactly accurate, I wonder if we couldn't\n> just look at the WAL files we have to reply and then guess that we'll go\n> through about half of them before we reach the end..? I mean, wouldn't\n> exactly be the first time that a percentage progress report wasn't\n> completely accurate. :)\n\nI don't think that'd work well, due to WAL segment recycling. We rename\nWAL files into place when removing them, and sometimes that can be a\n*lot* of files. 
It's one thing for there to be a ~20% inaccuracy in\nestimated amount of work, another to have misestimates that are off by\norders of magnitude.\n\n\n\n> > I wonder if we ought to occasionally update something like\n> > ControlFileData->minRecoveryPoint on primaries, similar to what we do on\n> > standbys? Then we could actually calculate a percentage, and it'd have\n> > the added advantage of allowing to detect more cases where the end of\n> > the WAL was lost. Obviously we'd have to throttle it somehow, to avoid\n> > adding a lot of fsyncs, but that seems doable?\n> \n> This seems to go against Tom's concerns wrt rewriting pg_control.\n\nI don't think that concern equally applies for what I am proposing\nhere. For one, we already have minRecoveryPoint in ControlData, and we\nalready use it for the purpose of determining where we need to recover\nto, albeit only during crash recovery. Imo that's substantially\ndifferent from adding actual recovery progress status information to the\ncontrol file.\n\nI also think that it'd actually be a significant reliability improvement\nif we maintained an approximate minRecoveryPoint during normal running:\nI've seen way too many cases where WAL files were lost / removed and\ncrash recovery just started up happily. 
Yes, it'd obviously not bullet proof, since we'd not want\nto add a significant stream of new fsyncs, but IME such WAL files\nlost/removed issues tend not to be about a few hundred bytes of WAL but\nmany segments missing.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 21 Apr 2021 12:36:05 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't" }, { "msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2021-04-21 14:36:24 -0400, Stephen Frost wrote:\n> > * Andres Freund (andres@anarazel.de) wrote:\n> > > Unfortunately I think something like a percentage is hard to calculate\n> > > right now. Even just looking at crash recovery (vs replication or\n> > > PITR), we don't currently know where the WAL ends without reading all\n> > > the WAL. The easiest thing to return would be something in LSNs or\n> > > bytes and I suspect that we don't want to expose either unauthenticated?\n> > \n> > While it obviously wouldn't be exactly accurate, I wonder if we couldn't\n> > just look at the WAL files we have to reply and then guess that we'll go\n> > through about half of them before we reach the end..? I mean, wouldn't\n> > exactly be the first time that a percentage progress report wasn't\n> > completely accurate. :)\n> \n> I don't think that'd work well, due to WAL segment recycling. We rename\n> WAL files into place when removing them, and sometimes that can be a\n> *lot* of files. It's one thing for there to be a ~20% inaccuracy in\n> estimated amount of work, another to have misestimates on the order of\n> magnitudes.\n\nI mean- we actively try to guess at how many WAL files we'll need during\neach checkpoint and if we're doing that decently then it'd hopefully be\non about the order of half the files, as I suggested, that we'll end up\ngoing through at any point in time. 
Naturally, it'll be different if\nthere's a forced checkpoint or a sudden spike of activity, but I'm not\nsure that it's an entirely unreasonable place to start if we're going to\nbe guessing at it.\n\n> > > I wonder if we ought to occasionally update something like\n> > > ControlFileData->minRecoveryPoint on primaries, similar to what we do on\n> > > standbys? Then we could actually calculate a percentage, and it'd have\n> > > the added advantage of allowing to detect more cases where the end of\n> > > the WAL was lost. Obviously we'd have to throttle it somehow, to avoid\n> > > adding a lot of fsyncs, but that seems doable?\n> > \n> > This seems to go against Tom's concerns wrt rewriting pg_control.\n> \n> I don't think that concern equally applies for what I am proposing\n> here. For one, we already have minRecoveryPoint in ControlData, and we\n> already use it for the purpose of determining where we need to recover\n> to, albeit only during crash recovery. Imo that's substantially\n> different from adding actual recovery progress status information to the\n> control file.\n\nI agree that it's not the same as adding actual recovery progress status\ninformation.\n\n> I also think that it'd actually be a significant reliability improvement\n> if we maintained an approximate minRecoveryPoint during normal running:\n> I've seen way too many cases where WAL files were lost / removed and\n> crash recovery just started up happily. Only hitting problems months\n> down the line. Yes, it'd obviously not bullet proof, since we'd not want\n> to add a significant stream of new fsyncs, but IME such WAL files\n> lost/removed issues tend not to be about a few hundred bytes of WAL but\n> many segments missing.\n\nI do agree that it's definitely a problem and one that I've seen as well\nwhere we think we reach the end of recovery even though we didn't\nactually. Having a way to avoid that happening would be quite nice. 
It\ndoes seem like we have some trade-offs here to weigh, but pg_control is\nindeed quite small..\n\nThanks,\n\nStephen", "msg_date": "Wed, 21 Apr 2021 15:51:38 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't" }, { "msg_contents": "Hi,\n\nOn 2021-04-21 15:51:38 -0400, Stephen Frost wrote:\n> It does seem like we have some trade-offs here to weigh, but\n> pg_control is indeed quite small..\n\nWhat do you mean by that? That the overhead of writing it out more\nfrequently wouldn't be too bad? Or that we shouldn't \"unnecessarily\" add\nmore fields to it?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 21 Apr 2021 13:00:32 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't" }, { "msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2021-04-21 15:51:38 -0400, Stephen Frost wrote:\n> > It does seem like we have some trade-offs here to weigh, but\n> > pg_control is indeed quite small..\n> \n> What do you mean by that? That the overhead of writing it out more\n> frequently wouldn't be too bad? Or that we shouldn't \"unnecessarily\" add\n> more fields to it?\n\nMostly just that the added overhead in writing it out more frequently\nwouldn't be too bad. 
Adding fields runs the risk of crossing the\nthreshold where we feel that we can safely assume all of it will make it\nto disk in one shot and therefore there's more reason to not add extra\nfields to it, if possible.\n\nSeems the missing bit here is \"how often, and how do we make that\nhappen?\" and then we can discuss if there's reason to be concerned that\nit would be 'too frequent' or cause too much additional overhead in\nterms of IO/fsyncs.\n\nThanks,\n\nStephen", "msg_date": "Wed, 21 Apr 2021 16:28:26 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Andres Freund (andres@anarazel.de) wrote:\n>> What do you mean by that? That the overhead of writing it out more\n>> frequently wouldn't be too bad? Or that we shouldn't \"unnecessarily\" add\n>> more fields to it?\n\n> Mostly just that the added overhead in writing it out more frequently\n> wouldn't be too bad.\n\nMy concern about it was not at all about performance, but that every time\nyou write it is a new opportunity for the filesystem to lose or corrupt\nthe data.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 21 Apr 2021 16:55:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't" }, { "msg_contents": "Hi,\n\nOn 2021-04-21 16:28:26 -0400, Stephen Frost wrote:\n> * Andres Freund (andres@anarazel.de) wrote:\n> > On 2021-04-21 15:51:38 -0400, Stephen Frost wrote:\n> > > It does seem like we have some trade-offs here to weigh, but\n> > > pg_control is indeed quite small..\n> > \n> > What do you mean by that? That the overhead of writing it out more\n> > frequently wouldn't be too bad? 
Or that we shouldn't \"unnecessarily\" add\n> > more fields to it?\n> \n> Mostly just that the added overhead in writing it out more frequently\n> wouldn't be too bad.\n>\n> Seems the missing bit here is \"how often, and how do we make that\n> happen?\" and then we can discuss if there's reason to be concerned that\n> it would be 'too frequent' or cause too much additional overhead in\n> terms of IO/fsyncs.\n\nThe number of writes and the number of fsyncs of the control file\nwouldn't necessarily have to be the same. We could e.g. update the file\nonce per segment, but only fsync it at a lower cadence. We already rely\non handling writes-without-fsync of the control file (which is trivial\ndue to the <= 512 byte limit).\n\nAnother interesting question is where we'd do the update from. It seems\nlike it ought to be some background process:\n\nI can see doing it in the checkpointer - but there's a few phases that\ncan take a while (e.g. sync) where currently don't call something like\nCheckpointWriteDelay() on a regular basis.\n\nI also can see doing it in bgwriter - none of the work it does should\ntake all that long, and minor increases in latency ought not to have\nmuch of an impact.\n\nWal writer seems less suitable, some workloads are sensitive to it not\ngetting around doing what it ought to do.\n\n\n> Adding fields runs the risk of crossing the\n> threshold where we feel that we can safely assume all of it will make it\n> to disk in one shot and therefore there's more reason to not add extra\n> fields to it, if possible.\n\nYea, we really should stay below 512 bytes (sector size). We're at 296\nright now, with 20 bytes lost to padding. If we got close to the limit\nwe could easily move some of the contents out of pg_control - we\ne.g. don't need to write out all the compile time values all the time,\nthey could live in a file similar to PG_VERSION instead. So I'm not too\nconcerned right now. 
But we also don't need to add anything, given that\nwe already have minRecoveryPoint.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 21 Apr 2021 13:58:40 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't" }, { "msg_contents": "Hi,\n\nOn 2021-04-21 16:55:28 -0400, Tom Lane wrote:\n> My concern about it was not at all about performance, but that every time\n> you write it is a new opportunity for the filesystem to lose or corrupt\n> the data.\n\nWe already do, sometimes very frequent, control file updates on standbys\nto update minRecoveryLSN. I don't recall reports of that causing\ncorruption issues. So I'd not be too concerned about that aspect?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 21 Apr 2021 14:00:59 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't" }, { "msg_contents": "Greetings,\n\nOn Wed, Apr 21, 2021 at 17:01 Andres Freund <andres@anarazel.de> wrote:\n\n> On 2021-04-21 16:55:28 -0400, Tom Lane wrote:\n> > My concern about it was not at all about performance, but that every time\n> > you write it is a new opportunity for the filesystem to lose or corrupt\n> > the data.\n>\n> We already do, sometimes very frequent, control file updates on standbys\n> to update minRecoveryLSN. I don't recall reports of that causing\n> corruption issues. So I'd not be too concerned about that aspect?\n\n\nOr perhaps we should consider having multiple copies..? 
Though I\ndefinitely have seen missing WAL causing difficult to realize / detect\ncorruption more than corrupt pg_control files...\n\nThanks,\n\nStephen", "msg_date": "Wed, 21 Apr 2021 17:04:18 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't" }, { "msg_contents": "On Wed, 21 Apr 2021 12:36:05 -0700\nAndres Freund <andres@anarazel.de> wrote:\n\n> [...] \n> \n> I don't think that concern equally applies for what I am proposing\n> here. For one, we already have minRecoveryPoint in ControlData, and we\n> already use it for the purpose of determining where we need to recover\n> to, albeit only during crash recovery. Imo that's substantially\n> different from adding actual recovery progress status information to the\n> control file.\n\nJust for the record, when I was talking about updating status of the startup\nin the controldata, I was thinking about setting the last known LSN replayed.\nNot some kind of variable string.\n\n> \n> I also think that it'd actually be a significant reliability improvement\n> if we maintained an approximate minRecoveryPoint during normal running:\n> I've seen way too many cases where WAL files were lost / removed and\n> crash recovery just started up happily. 
Only hitting problems months\n> down the line. Yes, it'd obviously not bullet proof, since we'd not want\n> to add a significant stream of new fsyncs, but IME such WAL files\n> lost/removed issues tend not to be about a few hundred bytes of WAL but\n> many segments missing.\n\nMaybe setting this minRecoveryPoint once per segment would be enough, near\nthe beginning of the WAL. At least, the recovery process would be\nforced to actually replay until the very last known segment.\n\nRegards,\n\n\n", "msg_date": "Thu, 22 Apr 2021 01:09:19 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't" }, { "msg_contents": "Hi,\n\nAs nobody opposed the log based approach, I thought of creating a\npatch using this approach. Please find the patch attached.\n\nIntroduced the new GUC variable 'log_min_duration_startup_process',\nwhich indicates the interval at which the progress of the startup\nprocess is logged when it is set to a value (in milliseconds)\ngreater than zero. If it is set to zero, then it logs all\navailable messages. 
If it is set to -1, the feature is disabled.\n\n> There are probably a number of reasons why this can happen, but the\n> ones that seem to come up most often in my experience are\n> (a) SyncDataDirectory takes a long time, (b) ResetUnloggedRelations\n> takes a long time, and (c) there's a lot of WAL to apply so that takes a\n> long time.\n\nI have added the proper logs in all of the above scenarios.\n\nFollowing is the sample log displayed during server startup when the\ntime period is set to 10ms.\n\n2021-06-04 19:40:06.390 IST [51116] LOG: Syncing data directory,\nelapsed time: 14.165 ms, current path: ./base/13892/16384_fsm\n2021-06-04 19:40:06.399 IST [51116] LOG: Syncing data directory\ncompleted after 22.661 ms\n2021-06-04 19:40:06.399 IST [51116] LOG: database system was not\nproperly shut down; automatic recovery in progress\n2021-06-04 19:40:06.401 IST [51116] LOG: Resetting unlogged relations\ncompleted after 0.219 ms\n2021-06-04 19:40:06.401 IST [51116] LOG: redo starts at 0/4728B88\n2021-06-04 19:40:06.411 IST [51116] LOG: Performing crash recovery,\nelapsed time: 10.002 ms, current LSN: 0/47AA998\n2021-06-04 19:40:06.421 IST [51116] LOG: Performing crash recovery,\nelapsed time: 20.002 ms, current LSN: 0/4838D80\n2021-06-04 19:40:06.431 IST [51116] LOG: Performing crash recovery,\nelapsed time: 30.002 ms, current LSN: 0/48DA718\n2021-06-04 19:40:06.441 IST [51116] LOG: Performing crash recovery,\nelapsed time: 40.002 ms, current LSN: 0/49791C0\n.\n.\n.\n2021-06-04 19:40:07.222 IST [51116] LOG: Performing crash recovery,\nelapsed time: 820.805 ms, current LSN: 0/76F6F10\n2021-06-04 19:40:07.227 IST [51116] LOG: invalid record length at\n0/774E758: wanted 24, got 0\n2021-06-04 19:40:07.227 IST [51116] LOG: redo done at 0/774E730\nsystem usage: CPU: user: 0.77 s, system: 0.03 s, elapsed: 0.82 s\n\nKindly let me know if any changes are required.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Thu, Apr 22, 2021 at 4:39 AM Jehan-Guillaume de 
Rorthais\n<jgdr@dalibo.com> wrote:\n>\n> On Wed, 21 Apr 2021 12:36:05 -0700\n> Andres Freund <andres@anarazel.de> wrote:\n>\n> > [...]\n> >\n> > I don't think that concern equally applies for what I am proposing\n> > here. For one, we already have minRecoveryPoint in ControlData, and we\n> > already use it for the purpose of determining where we need to recover\n> > to, albeit only during crash recovery. Imo that's substantially\n> > different from adding actual recovery progress status information to the\n> > control file.\n>\n> Just for the record, when I was talking about updating status of the startup\n> in the controldata, I was thinking about setting the last known LSN replayed.\n> Not some kind of variable string.\n>\n> >\n> > I also think that it'd actually be a significant reliability improvement\n> > if we maintained an approximate minRecoveryPoint during normal running:\n> > I've seen way too many cases where WAL files were lost / removed and\n> > crash recovery just started up happily. Only hitting problems months\n> > down the line. Yes, it'd obviously not bullet proof, since we'd not want\n> > to add a significant stream of new fsyncs, but IME such WAL files\n> > lost/removed issues tend not to be about a few hundred bytes of WAL but\n> > many segments missing.\n>\n> Maybe setting this minRecoveryPoint once per segment would be enough, near\n> from the beginning of the WAL. 
At least, the recovery process would be\n> forced to actually replay until the very last known segment.\n>\n> Regards,\n>\n>", "msg_date": "Fri, 4 Jun 2021 19:49:21 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't" }, { "msg_contents": "On Fri, Jun 04, 2021 at 07:49:21PM +0530, Nitin Jadhav wrote:\n> I have added the proper logs in all of the above scenarios.\n> \n> Following is the sample log displayed during server startup when the\n> time period is set to 10ms.\n> \n> 2021-06-04 19:40:06.390 IST [51116] LOG: Syncing data directory, elapsed time: 14.165 ms, current path: ./base/13892/16384_fsm\n> 2021-06-04 19:40:06.399 IST [51116] LOG: Syncing data directory completed after 22.661 ms\n\n|2021-06-04 19:40:07.222 IST [51116] LOG: Performing crash recovery, elapsed time: 820.805 ms, current LSN: 0/76F6F10\n|2021-06-04 19:40:07.227 IST [51116] LOG: invalid record length at 0/774E758: wanted 24, got 0\n|2021-06-04 19:40:07.227 IST [51116] LOG: redo done at 0/774E730 system usage: CPU: user: 0.77 s, system: 0.03 s, elapsed: 0.82 s\n\nShould it show the rusage ? 
It's shown at startup completion since 10a5b35a0,\nso it seems strange not to show it here.\n\n+ log_startup_process_progress(\"Syncing data directory\", path, false);\n\nI think the fsync vs syncfs paths should be distinguished: \"Syncing data\ndirectory (fsync)\" vs \"Syncing data directory (syncfs)\".\n\n+ {\"log_min_duration_startup_process\", PGC_SUSET, LOGGING_WHEN,\n\nI think it should be PGC_SIGHUP, to allow changing it during runtime.\nObviously it has no effect except during startup, but the change will be\neffective if the current process crashes.\nSee also: https://www.postgresql.org/message-id/20210526001359.GE3676@telsasoft.com\n\n+extern void log_startup_process_progress(char *operation, void *data,\n+ bool is_complete);\n\nI think this should take an enum operation, rather than using strcmp() on it\nlater. The enum values might be RECOVERY_START, RECOVERY_END, rather than\nhaving a bool is_complete.\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 6 Jun 2021 17:23:05 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't" }, { "msg_contents": "On Sun, Jun 6, 2021 at 6:23 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Should it show the rusage ? It's shown at startup completion since 10a5b35a0,\n> so it seems strange not to show it here.\n\nI don't know, that seems like it's going to make the messages awfully\nlong, and I'm not sure of what use it is to see that for every report.\n\nI don't like the name very much. log_min_duration_startup_process\nseems to have been chosen to correspond to log_min_duration_statement,\nbut the semantics are different. That one is a threshold, whereas this\none is an interval. Maybe something like\nlog_startup_progress_interval?\n\nAs far as the patch itself goes, I think that the overhead of this\napproach is going to be unacceptably high. 
I was imagining having a\ntimer running in the background that fires periodically, with the\ninterval handler just setting a flag. Then in the foreground we just\nneed to check whether the flag is set. I doubt that we can get away\nwith a GetCurrentTimestamp() after applying every WAL record ... that\nseems like it will be slow.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 7 Jun 2021 09:21:05 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: when the startup process doesn't" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> ... I doubt that we can get away\n> with a GetCurrentTimestamp() after applying every WAL record ... that\n> seems like it will be slow.\n\nYeah, that's going to be pretty awful even on machines with fast\ngettimeofday, never mind ones where it isn't.\n\nIt should be possible to use utils/misc/timeout.c to manage the\ninterrupt, I'd think.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 07 Jun 2021 09:42:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't" }, { "msg_contents": "> Should it show the rusage ? It's shown at startup completion since 10a5b35a0,\n> so it seems strange not to show it here.\n\n> I don't know, that seems like it's going to make the messages awfully\n> long, and I'm not sure of what use it is to see that for every report.\n\nI have not changed anything wrt this. 
If it is really required, then I will change it.\n\n> + log_startup_process_progress(\"Syncing data directory\", path, false);\n\n> I think the fsync vs syncfs paths should be distinguished: \"Syncing data\n> directory (fsync)\" vs \"Syncing data directory (syncfs)\".\n\nFixed.\n\n> + {\"log_min_duration_startup_process\", PGC_SUSET, LOGGING_WHEN,\n>\n> I think it should be PGC_SIGHUP, to allow changing it during runtime.\n> Obviously it has no effect except during startup, but the change will be\n> effective if the current process crashes.\n> See also: https://www.postgresql.org/message-id/20210526001359.GE3676@telsasoft.com\n\nI did not get exactly how it will change behaviour. In my\nunderstanding, when the server restarts after a crash, it fetches the\nvalue from the config file. So if there is any change, it gets\napplied. Kindly correct me if I am wrong.\n\n> +extern void log_startup_process_progress(char *operation, void *data,\n> + bool is_complete);\n>\n> I think this should take an enum operation, rather than using strcmp() on it\n> later. The enum values might be RECOVERY_START, RECOVERY_END, rather than\n> having a bool is_complete.\n\nFixed.\n\n> I don't like the name very much. log_min_duration_startup_process\n> seems to have been chosen to correspond to log_min_duration_statement,\n> but the semantics are different. That one is a threshold, whereas this\n> one is an interval. Maybe something like\n> log_startup_progress_interval?\n\nYes. This looks more appropriate. Fixed in the attached patch.\n\n> As far as the patch itself goes, I think that the overhead of this\n> approach is going to be unacceptably high. I was imagining having a\n> timer running in the background that fires periodically, with the\n> interval handler just setting a flag. Then in the foreground we just\n> need to check whether the flag is set. I doubt that we can get away\n> with a GetCurrentTimestamp() after applying every WAL record ... 
that\n> seems like it will be slow.\n\nThanks for correcting me. This approach is far better than what I had\nused earlier. I have done the code changes as per your approach in the\nattached patch.\n\nKindly let me know if any changes are required.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Mon, Jun 7, 2021 at 7:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > ... I doubt that we can get away\n> > with a GetCurrentTimestamp() after applying every WAL record ... that\n> > seems like it will be slow.\n>\n> Yeah, that's going to be pretty awful even on machines with fast\n> gettimeofday, never mind ones where it isn't.\n>\n> It should be possible to use utils/misc/timeout.c to manage the\n> interrupt, I'd think.\n>\n> regards, tom lane", "msg_date": "Wed, 9 Jun 2021 17:09:54 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't" }, { "msg_contents": "On Wed, Jun 09, 2021 at 05:09:54PM +0530, Nitin Jadhav wrote:\n> > + {\"log_min_duration_startup_process\", PGC_SUSET, LOGGING_WHEN,\n> >\n> > I think it should be PGC_SIGHUP, to allow changing it during runtime.\n> > Obviously it has no effect except during startup, but the change will be\n> > effective if the current process crashes.\n> > See also: https://www.postgresql.org/message-id/20210526001359.GE3676@telsasoft.com\n> \n> I did not get exactly how it will change behaviour. In my\n> understanding, when the server restarts after a crash, it fetches the\n> value from the config file. So if there is any change that gets\n> affected. Kindly correct me if I am wrong.\n\nI don't think so. I checked and SelectConfigFiles is called only once to read\nconfig files and cmdline args. 
And not called on restart_after_crash.\n\nThe GUC definitely isn't SUSET, since it's not useful to write in a (super)\nuser session SET log_min_duration_startup_process=123.\n\nI've triple checked the behavior using a patch I submitted for Thomas' syncfs\nfeature. ALTER SYSTEM recovery_init_sync_method=syncfs was not picked up when\nI sent SIGABRT. But with my patch, if I also do SELECT pg_reload_conf(), then\na future crash uses syncfs.\nhttps://www.postgresql.org/message-id/20210526001359.GE3676@telsasoft.com\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 9 Jun 2021 11:19:18 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't" }, { "msg_contents": "> > > + {\"log_min_duration_startup_process\", PGC_SUSET, LOGGING_WHEN,\n> > >\n> > > I think it should be PGC_SIGHUP, to allow changing it during runtime.\n> > > Obviously it has no effect except during startup, but the change will be\n> > > effective if the current process crashes.\n> > > See also: https://www.postgresql.org/message-id/20210526001359.GE3676@telsasoft.com\n> >\n> > I did not get exactly how it will change behaviour. In my\n> > understanding, when the server restarts after a crash, it fetches the\n> > value from the config file. So if there is any change that gets\n> > affected. Kindly correct me if I am wrong.\n\nSorry my understanding was wrong. But I'm not completely convinced\nwith the above description saying that the change will be effective if\nthe current process crashes.\nAFAIK, whenever we set the GucContext less than PGC_SIGHUP (that is\neither PGC_POSTMASTER or PGC_INTERNAL) then any change in the config\nfile will not get affected during restart after crash. If the\nGucContext is greater than or equal to PGC_SIGHUP, then any change in\nthe config file will be changed once it receives the SIGHUP signal. So\nit gets affected by a restart after a crash. 
So since the GucContext\nset here is PGC_SUSET which is greater than PGC_SIGHUP, there is no\nchange in the behaviour wrt this point.\n\n> I've triple checked the behavior using a patch I submitted for Thomas' syncfs\n> feature. ALTER SYSTEM recovery_init_sync_method=syncfs was not picked up when\n> I sent SIGABRT. But with my patch, if I also do SELECT pg_reload_conf(), then\n> a future crash uses syncfs.\n> https://www.postgresql.org/message-id/20210526001359.GE3676@telsasoft.com\n\nThe difference is since the behaviour is compared between\nPGC_POSTMASTER and PGC_SIGHUP.\n\n> The GUC definitely isn't SUSET, since it's not useful to write in a (super)\n> user session SET log_min_duration_startup_process=123.\nI agree with this. I may have to change this value as setting in a\nuser session is not at all useful. But I am confused between\nPGC_POSTMASTER and PGC_SIGHUP. We should use PGC_SIGHUP if we would\nlike to allow the change during restart after a crash. Otherwise\nPGC_POSTMASTER would be sufficient. Kindly share your thoughts.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Wed, Jun 9, 2021 at 9:49 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Wed, Jun 09, 2021 at 05:09:54PM +0530, Nitin Jadhav wrote:\n> > > + {\"log_min_duration_startup_process\", PGC_SUSET, LOGGING_WHEN,\n> > >\n> > > I think it should be PGC_SIGHUP, to allow changing it during runtime.\n> > > Obviously it has no effect except during startup, but the change will be\n> > > effective if the current process crashes.\n> > > See also: https://www.postgresql.org/message-id/20210526001359.GE3676@telsasoft.com\n> >\n> > I did not get exactly how it will change behaviour. In my\n> > understanding, when the server restarts after a crash, it fetches the\n> > value from the config file. So if there is any change that gets\n> > affected. Kindly correct me if I am wrong.\n>\n> I don't think so. I checked and SelectConfigFiles is called only once to read\n> config files and cmdline args. 
And not called on restart_after_crash.\n>\n> The GUC definitely isn't SUSET, since it's not useful to write in a (super)\n> user session SET log_min_duration_startup_process=123.\n>\n> I've triple checked the behavior using a patch I submitted for Thomas' syncfs\n> feature. ALTER SYSTEM recovery_init_sync_method=syncfs was not picked up when\n> I sent SIGABRT. But with my patch, if I also do SELECT pg_reload_conf(), then\n> a future crash uses syncfs.\n> https://www.postgresql.org/message-id/20210526001359.GE3676@telsasoft.com\n>\n> --\n> Justin\n\n\n", "msg_date": "Thu, 10 Jun 2021 15:19:20 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't" }, { "msg_contents": "On Thu, Jun 10, 2021 at 03:19:20PM +0530, Nitin Jadhav wrote:\n> > > > + {\"log_min_duration_startup_process\", PGC_SUSET, LOGGING_WHEN,\n> > > >\n> > > > I think it should be PGC_SIGHUP, to allow changing it during runtime.\n> > > > Obviously it has no effect except during startup, but the change will be\n> > > > effective if the current process crashes.\n> > > > See also: https://www.postgresql.org/message-id/20210526001359.GE3676@telsasoft.com\n> > >\n> > > I did not get exactly how it will change behaviour. In my\n> > > understanding, when the server restarts after a crash, it fetches the\n> > > value from the config file. So if there is any change that gets\n> > > affected. Kindly correct me if I am wrong.\n> \n> Sorry my understanding was wrong. But I'm not completely convinced\n> with the above description saying that the change will be effective if\n> the current process crashes.\n> AFAIK, whenever we set the GucContext less than PGC_SIGHUP (that is\n> either PGC_POSTMASTER or PGC_INTERNAL) then any change in the config\n> file will not get affected during restart after crash. 
If the\n> GucContext is greater than or equal to PGC_SIGHUP, then any change in\n> the config file will be changed once it receives the SIGHUP signal. So\n> it gets affected by a restart after a crash. So since the GucContext\n> set here is PGC_SUSET which is greater than PGC_SIGHUP, there is no\n> change in the behaviour wrt this point.\n\nSince you agreed that SUSET was wrong, and PGC_POSTMASTER doesn't allow\nchanging the value without restart, doesn't it follow that SIGHUP is what's\nwanted ?\n\n> > I've triple checked the behavior using a patch I submitted for Thomas' syncfs\n> > feature. ALTER SYSTEM recovery_init_sync_method=syncfs was not picked up when\n> > I sent SIGABRT. But with my patch, if I also do SELECT pg_reload_conf(), then\n> > a future crash uses syncfs.\n> > https://www.postgresql.org/message-id/20210526001359.GE3676@telsasoft.com\n> \n> The difference is since the behaviour is compared between\n> PGC_POSTMASTER and PGC_SIGHUP.\n> \n> > The GUC definitely isn't SUSET, since it's not useful to write in a (super)\n> > user session SET log_min_duration_startup_process=123.\n> I agree with this. I may have to change this value as setting in a\n> user session is not at all useful. But I am confused between\n> PGC_POSTMASTER and PGC_SIGHUP. We should use PGC_SIGHUP if we would\n> like to allow the change during restart after a crash. Otherwise\n> PGC_POSTMASTER would be sufficient. 
Kindly share your thoughts.\n> \n> Thanks & Regards,\n> Nitin Jadhav\n> \n> On Wed, Jun 9, 2021 at 9:49 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > On Wed, Jun 09, 2021 at 05:09:54PM +0530, Nitin Jadhav wrote:\n> > > > + {\"log_min_duration_startup_process\", PGC_SUSET, LOGGING_WHEN,\n> > > >\n> > > > I think it should be PGC_SIGHUP, to allow changing it during runtime.\n> > > > Obviously it has no effect except during startup, but the change will be\n> > > > effective if the current process crashes.\n> > > > See also: https://www.postgresql.org/message-id/20210526001359.GE3676@telsasoft.com\n> > >\n> > > I did not get exactly how it will change behaviour. In my\n> > > understanding, when the server restarts after a crash, it fetches the\n> > > value from the config file. So if there is any change that gets\n> > > affected. Kindly correct me if I am wrong.\n> >\n> > I don't think so. I checked and SelectConfigFiles is called only once to read\n> > config files and cmdline args. And not called on restart_after_crash.\n> >\n> > The GUC definitely isn't SUSET, since it's not useful to write in a (super)\n> > user session SET log_min_duration_startup_process=123.\n> >\n> > I've triple checked the behavior using a patch I submitted for Thomas' syncfs\n> > feature. ALTER SYSTEM recovery_init_sync_method=syncfs was not picked up when\n> > I sent SIGABRT. But with my patch, if I also do SELECT pg_reload_conf(), then\n> > a future crash uses syncfs.\n> > https://www.postgresql.org/message-id/20210526001359.GE3676@telsasoft.com\n\n\n", "msg_date": "Thu, 10 Jun 2021 05:01:25 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't" }, { "msg_contents": "> Since you agreed that SUSET was wrong, and PGC_POSTMASTER doesn't allow\n> changing the value without restart, doesn't it follow that SIGHUP is what's\n> wanted ?\n\nYes. 
I have done the changes in the attached patch.\n\nApart from this, I have done a few other changes to the patch. The\nchanges include\n\n1. Renamed 'InitCurrentOperation' to 'InitStartupProgress()'.\n2. Divided the functionality of 'LogStartupProgress()' into 2 parts.\nOne for logging the progress and the other to log the completion\ninformation. The first part's function name remains as is and a new\nfunction 'CloseStartupProgress()' added for the second part.\n3. In case of any invalid operations found during logging of the\nstartup progress, throwing an error. This is not a concern unless the\ndeveloper makes a mistake.\n4. Modified the 'StartupProcessOp' enums like 'FSYNC_START' to\n'FSYNC_IN_PROGRESS' for better readability.\n5. Updated the comments and some cosmetic changes.\n\nKindly share your comments.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Thu, Jun 10, 2021 at 3:31 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Thu, Jun 10, 2021 at 03:19:20PM +0530, Nitin Jadhav wrote:\n> > > > > + {\"log_min_duration_startup_process\", PGC_SUSET, LOGGING_WHEN,\n> > > > >\n> > > > > I think it should be PGC_SIGHUP, to allow changing it during runtime.\n> > > > > Obviously it has no effect except during startup, but the change will be\n> > > > > effective if the current process crashes.\n> > > > > See also: https://www.postgresql.org/message-id/20210526001359.GE3676@telsasoft.com\n> > > >\n> > > > I did not get exactly how it will change behaviour. In my\n> > > > understanding, when the server restarts after a crash, it fetches the\n> > > > value from the config file. So if there is any change that gets\n> > > > affected. Kindly correct me if I am wrong.\n> >\n> > Sorry my understanding was wrong. 
But I'm not completely convinced\n> > with the above description saying that the change will be effective if\n> > the current process crashes.\n> > AFAIK, whenever we set the GucContext less than PGC_SIGHUP (that is\n> > either PGC_POSTMASTER or PGC_INTERNAL) then any change in the config\n> > file will not get affected during restart after crash. If the\n> > GucContext is greater than or equal to PGC_SIGHUP, then any change in\n> > the config file will be changed once it receives the SIGHUP signal. So\n> > it gets affected by a restart after a crash. So since the GucContext\n> > set here is PGC_SUSET which is greater than PGC_SIGHUP, there is no\n> > change in the behaviour wrt this point.\n>\n> Since you agreed that SUSET was wrong, and PGC_POSTMASTER doesn't allow\n> changing the value without restart, doesn't it follow that SIGHUP is what's\n> wanted ?\n>\n> > > I've triple checked the behavior using a patch I submitted for Thomas' syncfs\n> > > feature. ALTER SYSTEM recovery_init_sync_method=syncfs was not picked up when\n> > > I sent SIGABRT. But with my patch, if I also do SELECT pg_reload_conf(), then\n> > > a future crash uses syncfs.\n> > > https://www.postgresql.org/message-id/20210526001359.GE3676@telsasoft.com\n> >\n> > The difference is since the behaviour is compared between\n> > PGC_POSTMASTER and PGC_SIGHUP.\n> >\n> > > The GUC definitely isn't SUSET, since it's not useful to write in a (super)\n> > > user session SET log_min_duration_startup_process=123.\n> > I agree with this. I may have to change this value as setting in a\n> > user session is not at all useful. But I am confused between\n> > PGC_POSTMASTER and PGC_SIGHUP. We should use PGC_SIGHUP if we would\n> > like to allow the change during restart after a crash. Otherwise\n> > PGC_POSTMASTER would be sufficient. 
Kindly share your thoughts.\n> >\n> > Thanks & Regards,\n> > Nitin Jadhav\n> >\n> > On Wed, Jun 9, 2021 at 9:49 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > >\n> > > On Wed, Jun 09, 2021 at 05:09:54PM +0530, Nitin Jadhav wrote:\n> > > > > + {\"log_min_duration_startup_process\", PGC_SUSET, LOGGING_WHEN,\n> > > > >\n> > > > > I think it should be PGC_SIGHUP, to allow changing it during runtime.\n> > > > > Obviously it has no effect except during startup, but the change will be\n> > > > > effective if the current process crashes.\n> > > > > See also: https://www.postgresql.org/message-id/20210526001359.GE3676@telsasoft.com\n> > > >\n> > > > I did not get exactly how it will change behaviour. In my\n> > > > understanding, when the server restarts after a crash, it fetches the\n> > > > value from the config file. So if there is any change that gets\n> > > > affected. Kindly correct me if I am wrong.\n> > >\n> > > I don't think so. I checked and SelectConfigFiles is called only once to read\n> > > config files and cmdline args. And not called on restart_after_crash.\n> > >\n> > > The GUC definitely isn't SUSET, since it's not useful to write in a (super)\n> > > user session SET log_min_duration_startup_process=123.\n> > >\n> > > I've triple checked the behavior using a patch I submitted for Thomas' syncfs\n> > > feature. ALTER SYSTEM recovery_init_sync_method=syncfs was not picked up when\n> > > I sent SIGABRT. 
But with my patch, if I also do SELECT pg_reload_conf(), then\n> > > a future crash uses syncfs.\n> > > https://www.postgresql.org/message-id/20210526001359.GE3676@telsasoft.com", "msg_date": "Thu, 17 Jun 2021 16:57:08 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't" }, { "msg_contents": "+ * Codes of the operations performed during startup process\n+ */\n+typedef enum StartupProcessOp\n+{\n+ SYNCFS_IN_PROGRESS,\n+ FSYNC_IN_PROGRESS,\n+ RECOVERY_IN_PROGRESS,\n+ RESET_UNLOGGED_REL_IN_PROGRESS,\n+ DUMMY,\n+ SYNCFS_END,\n+ FSYNC_END,\n+ RECOVERY_END,\n+ RESET_UNLOGGED_REL_END\n+} StartupProcessOp;\n\nWhat is DUMMY about ? If you just want to separate the \"start\" from \"end\",\nyou could write:\n\n/* codes for start of operations */\nFSYNC_IN_PROGRESS\nSYNCFS_IN_PROGRESS\n...\n/* codes for end of operations */\nFSYNC_END\nSYNCFS_END\n...\n\nOr group them together like:\n\nFSYNC_IN_PROGRESS,\nFSYNC_END,\nSYNCFS_IN_PROGRESS, \nSYNCFS_END,\nRECOVERY_IN_PROGRESS,\nRECOVERY_END,\nRESET_UNLOGGED_REL_IN_PROGRESS,\nRESET_UNLOGGED_REL_END,\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 17 Jun 2021 07:52:54 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "> What is DUMMY about ? 
If you just want to separate the \"start\" from \"end\",\n> you could write:\n>\n> /* codes for start of operations */\n> FSYNC_IN_PROGRESS\n> SYNCFS_IN_PROGRESS\n> ...\n> /* codes for end of operations */\n> FSYNC_END\n> SYNCFS_END\n> ...\n\nThat was by mistake and I have corrected it in the attached patch.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Thu, Jun 17, 2021 at 6:22 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> + * Codes of the operations performed during startup process\n> + */\n> +typedef enum StartupProcessOp\n> +{\n> + SYNCFS_IN_PROGRESS,\n> + FSYNC_IN_PROGRESS,\n> + RECOVERY_IN_PROGRESS,\n> + RESET_UNLOGGED_REL_IN_PROGRESS,\n> + DUMMY,\n> + SYNCFS_END,\n> + FSYNC_END,\n> + RECOVERY_END,\n> + RESET_UNLOGGED_REL_END\n> +} StartupProcessOp;\n>\n> What is DUMMY about ? If you just want to separate the \"start\" from \"end\",\n> you could write:\n>\n> /* codes for start of operations */\n> FSYNC_IN_PROGRESS\n> SYNCFS_IN_PROGRESS\n> ...\n> /* codes for end of operations */\n> FSYNC_END\n> SYNCFS_END\n> ...\n>\n> Or group them together like:\n>\n> FSYNC_IN_PROGRESS,\n> FSYNC_END,\n> SYNCFS_IN_PROGRESS,\n> SYNCFS_END,\n> RECOVERY_IN_PROGRESS,\n> RECOVERY_END,\n> RESET_UNLOGGED_REL_IN_PROGRESS,\n> RESET_UNLOGGED_REL_END,\n>\n> --\n> Justin", "msg_date": "Mon, 21 Jun 2021 12:06:30 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "Few comments for v4 patch:\n\n@@ -7351,6 +7363,8 @@ StartupXLOG(void)\n (errmsg(\"redo starts at %X/%X\",\n LSN_FORMAT_ARGS(ReadRecPtr))));\n\n+ InitStartupProgress();\n+\n /*\n * main redo apply loop\n */\n@@ -7358,6 +7372,8 @@ StartupXLOG(void)\n {\n bool switchedTLI = false;\n\n+ LogStartupProgress(RECOVERY_IN_PROGRESS, NULL);\n+\n #ifdef WAL_DEBUG\n if (XLOG_DEBUG ||\n (rmid == RM_XACT_ID && trace_recovery_messages <= DEBUG2) ||\n@@ -7569,6 +7585,8 @@ StartupXLOG(void)\n * end of main 
redo apply loop\n */\n\n+ CloseStartupProgress(RECOVERY_END);\n\nI am not sure I am getting the code flow correctly. From CloseStartupProgress()\nnaming it seems, it corresponds to InitStartupProgress() but what it is doing\nis similar to LogStartupProgress(). I think it should be renamed to be inlined\nwith LogStartupProgress(), IMO.\n---\n\n+\n+ /* Return if any invalid operation */\n+ if (operation >= SYNCFS_END)\n+ return;\n....\n+ /* Return if any invalid operation */\n+ if (operation < SYNCFS_END)\n+ return;\n+\n\nThis part should be an assertion, it's the developer's responsibility to call\ncorrectly.\n---\n\n+/*\n+ * Codes of the operations performed during startup process\n+ */\n+typedef enum StartupProcessOp\n+{\n+ /* Codes for in-progress operations */\n+ SYNCFS_IN_PROGRESS,\n+ FSYNC_IN_PROGRESS,\n+ RECOVERY_IN_PROGRESS,\n+ RESET_UNLOGGED_REL_IN_PROGRESS,\n+ /* Codes for end of operations */\n+ SYNCFS_END,\n+ FSYNC_END,\n+ RECOVERY_END,\n+ RESET_UNLOGGED_REL_END\n+} StartupProcessOp;\n+\n\nSince we do have a separate call for the in-progress operation and the\nend-operation, only a single enum would have been enough. If we do this, then I\nthink we should remove get_startup_process_operation_string() move messages to\nthe respective function.\n---\n\nAlso, with your patch \"make check-world\" has few failures, kindly check that.\n\nRegards,\nAmul\n\n\nOn Mon, Jun 21, 2021 at 12:06 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> > What is DUMMY about ? 
If you just want to separate the \"start\" from \"end\",\n> > you could write:\n> >\n> > /* codes for start of operations */\n> > FSYNC_IN_PROGRESS\n> > SYNCFS_IN_PROGRESS\n> > ...\n> > /* codes for end of operations */\n> > FSYNC_END\n> > SYNCFS_END\n> > ...\n>\n> That was by mistake and I have corrected it in the attached patch.\n>\n> Thanks & Regards,\n> Nitin Jadhav\n>\n> On Thu, Jun 17, 2021 at 6:22 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > + * Codes of the operations performed during startup process\n> > + */\n> > +typedef enum StartupProcessOp\n> > +{\n> > + SYNCFS_IN_PROGRESS,\n> > + FSYNC_IN_PROGRESS,\n> > + RECOVERY_IN_PROGRESS,\n> > + RESET_UNLOGGED_REL_IN_PROGRESS,\n> > + DUMMY,\n> > + SYNCFS_END,\n> > + FSYNC_END,\n> > + RECOVERY_END,\n> > + RESET_UNLOGGED_REL_END\n> > +} StartupProcessOp;\n> >\n> > What is DUMMY about ? If you just want to separate the \"start\" from \"end\",\n> > you could write:\n> >\n> > /* codes for start of operations */\n> > FSYNC_IN_PROGRESS\n> > SYNCFS_IN_PROGRESS\n> > ...\n> > /* codes for end of operations */\n> > FSYNC_END\n> > SYNCFS_END\n> > ...\n> >\n> > Or group them together like:\n> >\n> > FSYNC_IN_PROGRESS,\n> > FSYNC_END,\n> > SYNCFS_IN_PROGRESS,\n> > SYNCFS_END,\n> > RECOVERY_IN_PROGRESS,\n> > RECOVERY_END,\n> > RESET_UNLOGGED_REL_IN_PROGRESS,\n> > RESET_UNLOGGED_REL_END,\n> >\n> > --\n> > Justin\n\n\n", "msg_date": "Fri, 9 Jul 2021 11:41:09 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "Hi,\n\nI'd really like to see this enabled by default, say with a default\ninterval of 10 seconds. If it has to be enabled explicitly, most\npeople won't, but I think a lot of people would benefit from knowing\nwhy their system is slow to start up when that sort of thing happens.\nI don't see much downside to having it on by default either, since it\nshouldn't be expensive. 
I think the GUC's units should be seconds, not\nmilliseconds, though.\n\nI tried starting the server with log_startup_progress_interval=1000\nand then crashing it to see what the output looked like. I got this:\n\n2021-07-09 15:49:55.956 EDT [99033] LOG: all server processes\nterminated; reinitializing\n2021-07-09 15:49:55.970 EDT [99106] LOG: database system was\ninterrupted; last known up at 2021-07-09 15:48:39 EDT\n2021-07-09 15:49:56.499 EDT [99106] LOG: Data directory sync (fsync)\ncomplete after 529.673 ms\n2021-07-09 15:49:56.501 EDT [99106] LOG: database system was not\nproperly shut down; automatic recovery in progress\n2021-07-09 15:49:56.503 EDT [99106] LOG: redo starts at 0/223494A8\n2021-07-09 15:49:57.504 EDT [99106] LOG: Performing crash recovery,\nelapsed time: 1000.373 ms, current LSN: 0/40A3F888\n2021-07-09 15:49:58.505 EDT [99106] LOG: Performing crash recovery,\nelapsed time: 2001.449 ms, current LSN: 0/41F89388\n2021-07-09 15:49:59.505 EDT [99106] LOG: Performing crash recovery,\nelapsed time: 3001.602 ms, current LSN: 0/55745760\n2021-07-09 15:50:00.506 EDT [99106] LOG: Performing crash recovery,\nelapsed time: 4002.677 ms, current LSN: 0/60CB9FE0\n2021-07-09 15:50:01.507 EDT [99106] LOG: Performing crash recovery,\nelapsed time: 5003.808 ms, current LSN: 0/6A2BBE10\n2021-07-09 15:50:02.508 EDT [99106] LOG: Performing crash recovery,\nelapsed time: 6004.916 ms, current LSN: 0/71BA3F90\n2021-07-09 15:50:03.385 EDT [99106] LOG: invalid record length at\n0/76BD80F0: wanted 24, got 0\n2021-07-09 15:50:03.385 EDT [99106] LOG: Crash recovery complete\nafter 6881.834 ms\n2021-07-09 15:50:03.385 EDT [99106] LOG: redo done at 0/76BD80C8\nsystem usage: CPU: user: 2.77 s, system: 3.80 s, elapsed: 6.88 s\n2021-07-09 15:50:04.778 EDT [99033] LOG: database system is ready to\naccept connections\n\nFew observations on this:\n\n- The messages you've added are capitalized, but the ones PostgreSQL\nhas currently are not. 
You should conform to the existing style.\n\n- The \"crash recovery complete\" message looks redundant with the \"redo\ndone\" message. Also, in my mind, \"redo\" is one particular phase of\ncrash recovery, so it looks really odd that \"crash recovery\" finishes\nfirst and then \"redo\" finishes. I think some thought is needed about\nthe terminology here.\n\n- I'm not clear why I get a message about the data directory fsync but\nnot about resetting unlogged relations. I wasn't really expecting to\nget a message about things that completed in less than the configured\ninterval, although I find that I don't mind having it there either.\nBut then it seems like it should be consistent across the various\nthings we're timing, and resetting unlogged relations should certainly\nbe one of those.\n\n- The way you've coded this has some drift. In a perfect world, I'd\nget a progress report at 1000.00 ms, 2000.00 ms, 3000.00 ms, etc.\nThat's never going to be the case, because there's always going to be\na slightly delay in responding to the timer interrupt. However, as\nyou've coded it, the delay increases over time. The first \"Performing\ncrash recovery\" message is only 373 ms late, but the last one is 4916\nms late. To avoid this, you want to reschedule the timer interrupt\nbased on the time the last one was supposed to fire, not the time it\nactually did fire. (Exception: If the time it actually did fire is\nbefore the time it was supposed to fire, then use the time it actually\ndid fire instead. This protects against the system clock being set\nbackwards.)\n\n...Robert\n\n\n", "msg_date": "Fri, 9 Jul 2021 16:00:54 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Mon, Jun 21, 2021 at 12:06 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n> That was by mistake and I have corrected it in the attached patch.\n\nThanks for the patch. 
Here are some comments. Please ignore if I\nrepeat any of the comments given previously, as I didn't look at the\nentire thread.\n\n1) A new line between function return value and the function name:\n+void CloseStartupProgress(StartupProcessOp operation)\n+{\nLike below:\n+void\n+CloseStartupProgress(StartupProcessOp operation)\n+{\n\n2) Add an entry in the commit fest, if it's not done yet. That way,\nthe patch gets tested on many platforms.\n\n3) Replace \"zero\" with the number \"0\".\n+ # -1 disables the feature, zero logs all\n\n4) I think we can just rename InitStartupProgress to\nEnableStartupProgress or EnableStartupOpProgress to be more in sync\nwith what it does.\n+/*\n+ * Sets the start timestamp of the current operation and also enables the\n+ * timeout for logging the progress of startup process.\n+ */\n+void\n+InitStartupProgress(void)\n+{\n\n5) Do we need the GetCurrentOperationStartTimestamp function at all?\nIt does nothing great actually, you might have referred to\nGetCurrentTimestamp which does a good amount of work that qualifies to\nbe inside a function. Can't we just use the operationStartTimestamp\nvariable? Can we rename operationStartTimestamp (I don't think we need\nto specify Timestamp in a variable name) to startup_op_start_time or\nsome other better name?\n+/*\n+ * Fetches the start timestamp of the current operation.\n+ */\n+TimestampTz\n+GetCurrentOperationStartTimestamp(void)\n+{\n\n6) I think you can transform below\n+ /* If the feature is disabled, then no need to proceed further. */\n+ if (log_startup_progress_interval < 0)\n+ return;\nto\n+ /* If the feature is disabled, then no need to proceed further. */\n+ if (log_startup_progress_interval == -1)\n+ return;\nas -1 means feature disabled and values < -1 are not allowed to be set at all.\n\n7) Isn't RECOVERY_IN_PROGRESS supposed to be REDO_IN_PRGRESS? Because,\n\"recovery in progress\" generally applies to the entire startup process\nright? 
Put it another way, the startup process as a whole does the\nentire recovery, and you have the RECOVERY_IN_PROGRESS in the redo\nphase of the entire startup process.\n\n8) Why do we need to call get_startup_process_operation_string here?\nCan't you directly use the error message text?\nif (operation == RECOVERY_IN_PROGRESS)\nereport(LOG,\n(errmsg(\"%s, elapsed time: %ld.%03d ms, current LSN: %X/%X\",\nget_startup_process_operation_string(operation),\n\n9) Do you need error handling in the default case of\nget_startup_process_operation_string? Instead, how about an assertion,\nAssert(operation >= SYNCFS_IN_PROGRESS && operation <=\nRESET_UNLOGGED_REL_END);?\n default:\n ereport(ERROR,\n (errmsg(\"unrecognized operation (%d) in startup\nprogress update\",\n operation)));\n10) I personally didn't like the name\nget_startup_process_operation_string. How about get_startup_op_string?\n\n11) As pointed out by Robert, the error log message should start with\nsmall letters.\n\"syncing data directory (syncfs)\");\n\"syncing data directory (fsync)\");\n\"performing crash recovery\");\n\"resetting unlogged relations\");\n In general, the error log message should start with small letters and\nnot end with \".\". 
The detail/hit messages should start with capital\nletters and end with \".\"\n ereport(ERROR,\n (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n errmsg(\"only B-Tree indexes are supported as targets\nfor verification\"),\n errdetail(\"Relation \\\"%s\\\" is not a B-Tree index.\",\n RelationGetRelationName(rel))));\n ereport(ERROR,\n (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n errmsg(\"sslcert and sslkey are superuser-only\"),\n errhint(\"User mappings with the sslcert or\nsslkey options set may only be created or modified by the\nsuperuser\")));\n\n12) How about starting SYNCFS_IN_PROGRESS = 1, and leaving 0 for some\nunknown value?\ntypedef enum StartupProcessOp\n{\n /* Codes for in-progress operations */\n SYNCFS_IN_PROGRESS = 1,\n\n13) Can we combine LogStartupProgress and CloseStartupProgress? Let's\nhave a single function LogStartupProgress(StartupProcessOp operation,\nconst char *path, bool start);, and differentiate the functionality\nwith the start flag.\n\n14) Can we move log_startup_progress_interval declaration from guc.h\nand guc.c to xlog.h and xlog.c? Because it makes more sense to be\nthere, similar to the other GUCs under /* these variables are GUC\nparameters related to XLOG */ in xlog.h.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Sat, 10 Jul 2021 13:51:00 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "> I am not sure I am getting the code flow correctly. From CloseStartupProgress()\n> naming it seems, it corresponds to InitStartupProgress() but what it is doing\n> is similar to LogStartupProgress(). 
I think it should be renamed to be inlined\n> with LogStartupProgress(), IMO.\n\nRenamed CloseStartupProgress() to LogStartupProgressComplete().\n\n> This part should be an assertion, it's the developer's responsibility to call\n> correctly.\n\nThis code is not required at all due to the fix of the next comment.\n\n> Since we do have a separate call for the in-progress operation and the\n> end-operation, only a single enum would have been enough. If we do this, then I\n> think we should remove get_startup_process_operation_string() move messages to\n> the respective function.\n\nFixed.\n\n> I'd really like to see this enabled by default, say with a default\n> interval of 10 seconds. If it has to be enabled explicitly, most\n> people won't, but I think a lot of people would benefit from knowing\n> why their system is slow to start up when that sort of thing happens.\n> I don't see much downside to having it on by default either, since it\n> shouldn't be expensive. I think the GUC's units should be seconds, not\n> milliseconds, though.\n\nI agree that it is better to enable it by default. Changed the code\naccordingly and changed the GUC's units to seconds.\n\n> The messages you've added are capitalized, but the ones PostgreSQL\n> has currently are not. You should conform to the existing style.\n\nFixed.\n\n> The \"crash recovery complete\" message looks redundant with the \"redo\n> done\" message. Also, in my mind, \"redo\" is one particular phase of\n> crash recovery, so it looks really odd that \"crash recovery\" finishes\n> first and then \"redo\" finishes. I think some thought is needed about\n> the terminology here.\n\nYes. \"redo\" is one phase of the crash recovery. Even \"resetting\nunlogged relations\" is also a part of the crash recovery. These 2 are\nthe major time consuming operations of the crash recovery task. There\nis a separate log message to indicate the progress of \"resetting the\nunlogged relations\". 
So instead of saying 'performing crash recovery\",\nit is better to say \"redo in progress\" and not add any additional\nmessage at the end of redo, instead retain the existing message to\navoid redundancy.\n\n> I'm not clear why I get a message about the data directory fsync but\n> not about resetting unlogged relations. I wasn't really expecting to\n> get a message about things that completed in less than the configured\n> interval, although I find that I don't mind having it there either.\n> But then it seems like it should be consistent across the various\n> things we're timing, and resetting unlogged relations should certainly\n> be one of those.\n\nIt is the same across all the things we'are timing. I was able to see\nthose messages on my machine. I feel there is not much overhead of\nlogging one message at the end of the operation even though it\ncompletes within the configured interval. Following are the log\nmessages shown on my machine.\n\n2021-07-20 18:47:32.046 IST [102230] LOG: listening on IPv4 address\n\"127.0.0.1\", port 5445\n2021-07-20 18:47:32.048 IST [102230] LOG: listening on Unix socket\n\"/tmp/.s.PGSQL.5445\"\n2021-07-20 18:47:32.051 IST [102234] LOG: database system was\ninterrupted; last known up at 2021-07-20 18:46:27 IST\n2021-07-20 18:47:32.060 IST [102234] LOG: data directory sync (fsync)\ncomplete after 0.00 s\n2021-07-20 18:47:32.060 IST [102234] LOG: database system was not\nproperly shut down; automatic recovery in progress\n2021-07-20 18:47:32.063 IST [102234] LOG: unlogged relations reset after 0.00 s\n2021-07-20 18:47:32.063 IST [102234] LOG: redo starts at 0/14EF418\n.2021-07-20 18:47:33.063 IST [102234] LOG: redo in progress, elapsed\ntime: 1.00 s, current LSN: 0/5C13D28\n.2021-07-20 18:47:34.063 IST [102234] LOG: redo in progress, elapsed\ntime: 2.00 s, current LSN: 0/A289160\n.2021-07-20 18:47:35.063 IST [102234] LOG: redo in progress, elapsed\ntime: 3.00 s, current LSN: 0/EE2DE10\n2021-07-20 18:47:35.563 IST [102234] LOG: 
invalid record length at\n0/115C63E0: wanted 24, got 0\n2021-07-20 18:47:35.563 IST [102234] LOG: redo done at 0/115C63B8\nsystem usage: CPU: user: 3.58 s, system: 0.14 s, elapsed: 3.50 s\n2021-07-20 18:47:35.564 IST [102234] LOG: unlogged relations reset after 0.00 s\n2021-07-20 18:47:35.706 IST [102230] LOG: database system is ready to\naccept connections\n\n\n> The way you've coded this has some drift. In a perfect world, I'd\n> get a progress report at 1000.00 ms, 2000.00 ms, 3000.00 ms, etc.\n> That's never going to be the case, because there's always going to be\n> a slightly delay in responding to the timer interrupt. However, as\n> you've coded it, the delay increases over time. The first \"Performing\n> crash recovery\" message is only 373 ms late, but the last one is 4916\n> ms late. To avoid this, you want to reschedule the timer interrupt\n> based on the time the last one was supposed to fire, not the time it\n> actually did fire. (Exception: If the time it actually did fire is\n> before the time it was supposed to fire, then use the time it actually\n> did fire instead. This protects against the system clock being set\n> backwards.)\n\nI have rescheduled the timer interrupt based on the time the last one\nwas supposed to fire, not the time it actually did fire. Now I am able\nto see the messages when the timer is timed out and it is very close\nto the configured interval. But I did not find a scenario when the\nabove mentioned exception can occur. Kindly let me know if I am wrong\nin the approach.\n\n> 1) A new line between function return value and the function name:\n> +void CloseStartupProgress(StartupProcessOp operation)\n> +{\n> Like below:\n> +void\n> +CloseStartupProgress(StartupProcessOp operation)\n> +{\n\nFixed.\n\n> 2) Add an entry in the commit fest, if it's not done yet. 
That way,\n> the patch gets tested on many platforms.\n\nI have an entry in the sept commitfest\nhttps://commitfest.postgresql.org/34/3261/\n\n> 3) Replace \"zero\" with the number \"0\".\n> + # -1 disables the feature, zero logs all\n\nFixed.\n\n> 4) I think we can just rename InitStartupProgress to\n> EnableStartupProgress or EnableStartupOpProgress to be more in sync\n> with what it does.\n\nI feel 'Init' is more appropriate than 'enable' here. As it not only\nenables the timer but also initializes some variables. Timer enabling\ncan also be interpreted as initialization. So in common 'init' is\nbetter than 'enable'.\n\n> 5) Do we need the GetCurrentOperationStartTimestamp function at all?\n> It does nothing great actually, you might have referred to\n> GetCurrentTimestamp which does a good amount of work that qualifies to\n> be inside a function. Can't we just use the operationStartTimestamp\n> variable? Can we rename operationStartTimestamp (I don't think we need\n> to specify Timestamp in a variable name) to startup_op_start_time or\n> some other better name?\n\nChanged it to 'startupProcessOpStartTime'.\n\n> 6) I think you can transform below\n> + /* If the feature is disabled, then no need to proceed further. */\n> + if (log_startup_progress_interval < 0)\n> + return;\n> to\n> + /* If the feature is disabled, then no need to proceed further. */\n> + if (log_startup_progress_interval == -1)\n> + return;\n> as -1 means feature disabled and values < -1 are not allowed to be set at all.\n\nI feel that should be ok. As '<0' includes '-1'. So it does our job. I\ncan change it if it is really required to do so.\n\n> 7) Isn't RECOVERY_IN_PROGRESS supposed to be REDO_IN_PRGRESS? Because,\n> \"recovery in progress\" generally applies to the entire startup process\n> right? 
Put it another way, the startup process as a whole does the\n> entire recovery, and you have the RECOVERY_IN_PROGRESS in the redo\n> phase of the entire startup process.\n\nChanged it as part of the earlier comment's fix. Modified the message\nalso to 'redo in progress' rather than 'recovery in progress'.\n\n> 8) Why do we need to call get_startup_process_operation_string here?\n> Can't you directly use the error message text?\n> if (operation == RECOVERY_IN_PROGRESS)\n> ereport(LOG,\n> (errmsg(\"%s, elapsed time: %ld.%03d ms, current LSN: %X/%X\",\n> get_startup_process_operation_string(operation),\n\nFixed it as part of an earlier comment.\n\n> 9) Do you need error handling in the default case of\n> get_startup_process_operation_string? Instead, how about an assertion,\n> Assert(operation >= SYNCFS_IN_PROGRESS && operation <=\n> RESET_UNLOGGED_REL_END);?\n> default:\n> ereport(ERROR,\n> (errmsg(\"unrecognized operation (%d) in startup\n>progress update\",\n> operation)));\n\nIt is better to have a default case. Assert is difficult to maintain\nif there are any modifications to the operations.\n\n> 10) I personally didn't like the name\n> get_startup_process_operation_string. How about get_startup_op_string?\n\nI have removed it as part of fixing the earlier comment.\n\n> 11) As pointed out by Robert, the error log message should start with\n> small letters.\n> \"syncing data directory (syncfs)\");\n> \"syncing data directory (fsync)\");\n> \"performing crash recovery\");\n> \"resetting unlogged relations\");\n> In general, the error log message should start with small letters and\n> not end with \".\". The detail/hit messages should start with capital\n> letters and end with \".\"\n\nThanks for the information.\n\n> 12) How about starting SYNCFS_IN_PROGRESS = 1, and leaving 0 for some\n> unknown value?\n> typedef enum StartupProcessOp\n> {\n> /* Codes for in-progress operations */\n> SYNCFS_IN_PROGRESS = 1,\n\nI don't find any reason to do so. So not changed. 
Kindly let me know\nif there is any specific reason which would help changing it.\n\n> 13) Can we combine LogStartupProgress and CloseStartupProgress? Let's\n> have a single function LogStartupProgress(StartupProcessOp operation,\n> const char *path, bool start);, and differentiate the functionality\n> with the start flag.\n\nThe function becomes complex and it will affect the readability.\n\n> 14) Can we move log_startup_progress_interval declaration from guc.h\n> and guc.c to xlog.h and xlog.c? Because it makes more sense to be\n> there, similar to the other GUCs under /* these variables are GUC\n> parameters related to XLOG */ in xlog.h.\n\nFixed.\n\nPlease find the v5 patch attached. Kindly let me know your comments.\n\n\n\n\n\n\nOn Sat, Jul 10, 2021 at 1:51 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Jun 21, 2021 at 12:06 PM Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> > That was by mistake and I have corrected it in the attached patch.\n>\n> Thanks for the patch. Here are some comments. Please ignore if I\n> repeat any of the comments given previously, as I didn't look at the\n> entire thread.\n>\n> 1) A new line between function return value and the function name:\n> +void CloseStartupProgress(StartupProcessOp operation)\n> +{\n> Like below:\n> +void\n> +CloseStartupProgress(StartupProcessOp operation)\n> +{\n>\n> 2) Add an entry in the commit fest, if it's not done yet. 
That way,\n> the patch gets tested on many platforms.\n>\n> 3) Replace \"zero\" with the number \"0\".\n> + # -1 disables the feature, zero logs all\n>\n> 4) I think we can just rename InitStartupProgress to\n> EnableStartupProgress or EnableStartupOpProgress to be more in sync\n> with what it does.\n> +/*\n> + * Sets the start timestamp of the current operation and also enables the\n> + * timeout for logging the progress of startup process.\n> + */\n> +void\n> +InitStartupProgress(void)\n> +{\n>\n> 5) Do we need the GetCurrentOperationStartTimestamp function at all?\n> It does nothing great actually, you might have referred to\n> GetCurrentTimestamp which does a good amount of work that qualifies to\n> be inside a function. Can't we just use the operationStartTimestamp\n> variable? Can we rename operationStartTimestamp (I don't think we need\n> to specify Timestamp in a variable name) to startup_op_start_time or\n> some other better name?\n> +/*\n> + * Fetches the start timestamp of the current operation.\n> + */\n> +TimestampTz\n> +GetCurrentOperationStartTimestamp(void)\n> +{\n>\n> 6) I think you can transform below\n> + /* If the feature is disabled, then no need to proceed further. */\n> + if (log_startup_progress_interval < 0)\n> + return;\n> to\n> + /* If the feature is disabled, then no need to proceed further. */\n> + if (log_startup_progress_interval == -1)\n> + return;\n> as -1 means feature disabled and values < -1 are not allowed to be set at all.\n>\n> 7) Isn't RECOVERY_IN_PROGRESS supposed to be REDO_IN_PRGRESS? Because,\n> \"recovery in progress\" generally applies to the entire startup process\n> right? 
Put it another way, the startup process as a whole does the\n> entire recovery, and you have the RECOVERY_IN_PROGRESS in the redo\n> phase of the entire startup process.\n>\n> 8) Why do we need to call get_startup_process_operation_string here?\n> Can't you directly use the error message text?\n> if (operation == RECOVERY_IN_PROGRESS)\n> ereport(LOG,\n> (errmsg(\"%s, elapsed time: %ld.%03d ms, current LSN: %X/%X\",\n> get_startup_process_operation_string(operation),\n>\n> 9) Do you need error handling in the default case of\n> get_startup_process_operation_string? Instead, how about an assertion,\n> Assert(operation >= SYNCFS_IN_PROGRESS && operation <=\n> RESET_UNLOGGED_REL_END);?\n> default:\n> ereport(ERROR,\n> (errmsg(\"unrecognized operation (%d) in startup\n> progress update\",\n> operation)));\n> 10) I personally didn't like the name\n> get_startup_process_operation_string. How about get_startup_op_string?\n>\n> 11) As pointed out by Robert, the error log message should start with\n> small letters.\n> \"syncing data directory (syncfs)\");\n> \"syncing data directory (fsync)\");\n> \"performing crash recovery\");\n> \"resetting unlogged relations\");\n> In general, the error log message should start with small letters and\n> not end with \".\". 
The detail/hit messages should start with capital\n> letters and end with \".\"\n> ereport(ERROR,\n> (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> errmsg(\"only B-Tree indexes are supported as targets\n> for verification\"),\n> errdetail(\"Relation \\\"%s\\\" is not a B-Tree index.\",\n> RelationGetRelationName(rel))));\n> ereport(ERROR,\n> (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> errmsg(\"sslcert and sslkey are superuser-only\"),\n> errhint(\"User mappings with the sslcert or\n> sslkey options set may only be created or modified by the\n> superuser\")));\n>\n> 12) How about starting SYNCFS_IN_PROGRESS = 1, and leaving 0 for some\n> unknown value?\n> typedef enum StartupProcessOp\n> {\n> /* Codes for in-progress operations */\n> SYNCFS_IN_PROGRESS = 1,\n>\n> 13) Can we combine LogStartupProgress and CloseStartupProgress? Let's\n> have a single function LogStartupProgress(StartupProcessOp operation,\n> const char *path, bool start);, and differentiate the functionality\n> with the start flag.\n>\n> 14) Can we move log_startup_progress_interval declaration from guc.h\n> and guc.c to xlog.h and xlog.c? Because it makes more sense to be\n> there, similar to the other GUCs under /* these variables are GUC\n> parameters related to XLOG */ in xlog.h.\n>\n> Regards,\n> Bharath Rupireddy.", "msg_date": "Wed, 21 Jul 2021 12:52:24 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Wed, Jul 21, 2021 at 12:52 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n> Please find the v5 patch attached. Kindly let me know your comments.\n\nThanks for the patch. Here are some comments on v5:\n1) I still don't see the need for two functions LogStartupProgress and\nLogStartupProgressComplete. Most of the code is duplicate. 
I think we\ncan just do it with a single function something like [1]:\n\n2) Why isn't there a\nLogStartupProgressComplete(STARTUP_PROCESS_OP_REDO)? Is it because of\nthe below existing log message?\nereport(LOG,\n(errmsg(\"redo done at %X/%X system usage: %s\",\nLSN_FORMAT_ARGS(ReadRecPtr),\npg_rusage_show(&ru0))));\n\n3) I think it should be, \",\" after occurred instead of \".\"\n+ * elapsed or not. TRUE if timeout occurred, FALSE otherwise.\ninstead of\n+ * elapsed or not. TRUE if timeout occurred. FALSE otherwise.\n\n[1]\n+/*\n+ * Logs the progress of the operations performed during the startup process.\n+ * is_complete true/false indicates that the operation is finished/\n+ * in-progress respectively.\n+ */\n+void\n+LogStartupProgress(StartupProcessOp op, const char *path,\n+ bool is_complete)\n+{\n+ long secs;\n+ int usecs;\n+ int elapsed_ms;\n+ int interval_in_ms;\n+\n+ /* If not called from the startup process then this feature is\nof no use. */\n+ if (!AmStartupProcess())\n+ return;\n+\n+ /* If the feature is disabled, then no need to proceed further. */\n+ if (log_startup_progress_interval < 0)\n+ return;\n+\n+ /*\n+ * If the operation is in-progress and the timeout hasn't occurred, then\n+ * no need to log the details.\n+ */\n+ if (!is_complete && !logStartupProgressTimeout)\n+ return;\n+\n+ /* Timeout has occurred. 
*/\n+ TimestampDifference(startupProcessOpStartTime,\n+ GetCurrentTimestamp(),\n+ &secs, &usecs);\n+\n+ /*\n+ * If the operation is in-progress, enable the timer for the next log\n+ * message based on the time that current log message timer\nwas supposed to\n+ * fire.\n+ */\n+ if (!is_complete)\n+ {\n+ elapsed_ms = (secs * 1000) + (usecs / 1000);\n+ interval_in_ms = log_startup_progress_interval * 1000;\n+ enable_timeout_after(LOG_STARTUP_PROGRESS_TIMEOUT,\n+\n(interval_in_ms - (elapsed_ms % interval_in_ms)));\n+ }\n+\n+ switch(op)\n+ {\n+ case STARTUP_PROCESS_OP_SYNCFS:\n+ {\n+ if (is_complete)\n+ ereport(LOG,\n+ (errmsg(\"data\ndirectory sync (syncfs) complete after %ld.%02d s\",\n+\n secs, (usecs / 10000))));\n+ else\n+ ereport(LOG,\n+\n(errmsg(\"syncing data directory (syncfs), elapsed time: %ld.%02d s,\ncurrent path: %s\",\n+\n secs, (usecs / 10000), path)));\n+ }\n+ break;\n+ case STARTUP_PROCESS_OP_FSYNC:\n+ {\n+ if (is_complete)\n+ ereport(LOG,\n+ (errmsg(\"data\ndirectory sync (fsync) complete after %ld.%02d s\",\n+\n secs, (usecs / 10000))));\n+ else\n+ ereport(LOG,\n+\n(errmsg(\"syncing data directory (fsync), elapsed time: %ld.%02d s,\ncurrent path: %s\",\n+\n secs, (usecs / 10000), path)));\n+ }\n+ break;\n+ case STARTUP_PROCESS_OP_REDO:\n+ {\n+ /*\n+ * No need to log redo completion\nstatus here, as it will be\n+ * done in the caller.\n+ */\n+ if (!is_complete)\n+ ereport(LOG,\n+ (errmsg(\"redo\nin progress, elapsed time: %ld.%02d s, current LSN: %X/%X\",\n+\n secs, (usecs / 10000), LSN_FORMAT_ARGS(ReadRecPtr))));\n+ }\n+ break;\n+ case STARTUP_PROCESS_OP_RESET_UNLOGGED_REL:\n+ {\n+ if (is_complete)\n+ ereport(LOG,\n+\n(errmsg(\"unlogged relations reset after %ld.%02d s\",\n+\n secs, (usecs / 10000))));\n+ else\n+ ereport(LOG,\n+\n(errmsg(\"resetting unlogged relations, elapsed time: %ld.%02d s,\ncurrent path: %s\",\n+\n secs, (usecs / 10000), path)));\n+ }\n+ break;\n+ default:\n+ ereport(ERROR,\n+ (errmsg(\"unrecognized\noperation (%d) in startup 
progress update\",\n+ op)));\n+ break;\n+ }\n+\n+ if (is_complete)\n+ disable_timeout(LOG_STARTUP_PROGRESS_TIMEOUT, false);\n+ else\n+ logStartupProgressTimeout = false;\n+}\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 21 Jul 2021 16:47:32 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "I think walkdir() should only call LogStartupProgress(FSYNC_IN_PROGRESS, path);\nwhen action == datadir_fsync_fname.\n\nResetUnloggedRelations() is calling\nResetUnloggedRelationsInTablespaceDir(\"base\", op);\nbefore calling InitStartupProgress().\n\nThis shows StartupXLOG() calling ResetUnloggedRelations() twice.\nShould they both be shown ? If so, maybe they should be distinguished as here:\n\n elog(DEBUG1, \"resetting unlogged relations: cleanup %d init %d\",\n (op & UNLOGGED_RELATION_CLEANUP) != 0,\n (op & UNLOGGED_RELATION_INIT) != 0);\n\nOn Wed, Jul 21, 2021 at 12:52:24PM +0530, Nitin Jadhav wrote:\n> 2021-07-20 18:47:32.046 IST [102230] LOG: listening on IPv4 address \"127.0.0.1\", port 5445\n> 2021-07-20 18:47:32.048 IST [102230] LOG: listening on Unix socket \"/tmp/.s.PGSQL.5445\"\n> 2021-07-20 18:47:32.051 IST [102234] LOG: database system was interrupted; last known up at 2021-07-20 18:46:27 IST\n> 2021-07-20 18:47:32.060 IST [102234] LOG: data directory sync (fsync) complete after 0.00 s\n> 2021-07-20 18:47:32.060 IST [102234] LOG: database system was not properly shut down; automatic recovery in progress\n> 2021-07-20 18:47:32.063 IST [102234] LOG: unlogged relations reset after 0.00 s\n> 2021-07-20 18:47:32.063 IST [102234] LOG: redo starts at 0/14EF418\n> 2021-07-20 18:47:33.063 IST [102234] LOG: redo in progress, elapsed time: 1.00 s, current LSN: 0/5C13D28\n> 2021-07-20 18:47:34.063 IST [102234] LOG: redo in progress, elapsed time: 2.00 s, current LSN: 0/A289160\n> 2021-07-20 18:47:35.063 IST 
[102234] LOG: redo in progress, elapsed time: 3.00 s, current LSN: 0/EE2DE10\n> 2021-07-20 18:47:35.563 IST [102234] LOG: invalid record length at 0/115C63E0: wanted 24, got 0\n> 2021-07-20 18:47:35.563 IST [102234] LOG: redo done at 0/115C63B8 system usage: CPU: user: 3.58 s, system: 0.14 s, elapsed: 3.50 s\n> 2021-07-20 18:47:35.564 IST [102234] LOG: unlogged relations reset after 0.00 s\n> 2021-07-20 18:47:35.706 IST [102230] LOG: database system is ready to accept connections\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 21 Jul 2021 08:13:26 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "> I still don't see the need for two functions LogStartupProgress and\n> LogStartupProgressComplete. Most of the code is duplicate. I think we\n> can just do it with a single function something like [1]:\n\nInitially I had written a common function for these 2. You can see\nthat in the earlier version of the patch. Later separated it since it\nwas too much for one function. If others also agree to make it common,\nI can make that change.\n\n> Why isn't there a\n> LogStartupProgressComplete(STARTUP_PROCESS_OP_REDO)? Is it because of\n> the below existing log message?\n> ereport(LOG,\n> (errmsg(\"redo done at %X/%X system usage: %s\",\n> LSN_FORMAT_ARGS(ReadRecPtr),\n> pg_rusage_show(&ru0))));\n\nYes. Adding another log message makes it redundant.\n\n> I think it should be, \",\" after occurred instead of \".\"\n> + * elapsed or not. TRUE if timeout occurred, FALSE otherwise.\n> instead of\n> + * elapsed or not. TRUE if timeout occurred. 
FALSE otherwise.\n\nFixed.\n\n> I think walkdir() should only call LogStartupProgress(FSYNC_IN_PROGRESS, path);\n> when action == datadir_fsync_fname.\n\nI agree and fixed it.\n\n> ResetUnloggedRelations() is calling\n> ResetUnloggedRelationsInTablespaceDir(\"base\", op);\n> before calling InitStartupProgress().\n\nFixed.\n\n> This shows StartupXLOG() calling ResetUnloggedRelations() twice.\n> Should they both be shown ? If so, maybe they should be distinguished as here:\n>\n> elog(DEBUG1, \"resetting unlogged relations: cleanup %d init %d\",\n> (op & UNLOGGED_RELATION_CLEANUP) != 0,\n> (op & UNLOGGED_RELATION_INIT) != 0);\n\nFixed. Added separate codes to distinguish.\n\nPlease find the patch attached.", "msg_date": "Fri, 23 Jul 2021 16:09:47 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Fri, Jul 23, 2021 at 04:09:47PM 
+0530, Nitin Jadhav wrote:\n> > I think walkdir() should only call LogStartupProgress(FSYNC_IN_PROGRESS, path);\n> > when action == datadir_fsync_fname.\n> \n> I agree and fixed it.\n\nI saw that you fixed it by calling InitStartupProgress() after the walkdir()\ncalls which do pre_sync_fname. So then walkdir is calling\nLogStartupProgress(STARTUP_PROCESS_OP_FSYNC) even when it's not doing fsync,\nand then LogStartupProgress() is returning because !AmStartupProcess().\n\nThat seems indirect, fragile, and confusing. I suggest that walkdir() should\ntake an argument for which operation to pass to LogStartupProgress(). You can\npass a special enum for cases where nothing should be logged, like\nSTARTUP_PROCESS_OP_NONE.\n\nOn Wed, Jul 21, 2021 at 04:47:32PM +0530, Bharath Rupireddy wrote:\n> 1) I still don't see the need for two functions LogStartupProgress and\n> LogStartupProgressComplete. Most of the code is duplicate. I think we\n> can just do it with a single function something like [1]:\n\nI agree that one function can do this more succinctly. 
I think it's best to\nuse a separate enum value for START operations and END operations.\n\n switch(operation)\n {\n case STARTUP_PROCESS_OP_SYNCFS_START:\n ereport(...);\n break;\n\n case STARTUP_PROCESS_OP_SYNCFS_END:\n ereport(...);\n break;\n\n case STARTUP_PROCESS_OP_FSYNC_START:\n ereport(...);\n break;\n\n case STARTUP_PROCESS_OP_FSYNC_END:\n ereport(...);\n break;\n\n ...\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 25 Jul 2021 12:56:54 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Sun, Jul 25, 2021 at 1:56 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Fri, Jul 23, 2021 at 04:09:47PM +0530, Nitin Jadhav wrote:\n> > > I think walkdir() should only call LogStartupProgress(FSYNC_IN_PROGRESS, path);\n> > > when action == datadir_fsync_fname.\n> >\n> > I agree and fixed it.\n>\n> I saw that you fixed it by calling InitStartupProgress() after the walkdir()\n> calls which do pre_sync_fname. So then walkdir is calling\n> LogStartupProgress(STARTUP_PROCESS_OP_FSYNC) even when it's not doing fsync,\n> and then LogStartupProgress() is returning because !AmStartupProcess().\n>\n> That seems indirect, fragile, and confusing. I suggest that walkdir() should\n> take an argument for which operation to pass to LogStartupProgress(). You can\n> pass a special enum for cases where nothing should be logged, like\n> STARTUP_PROCESS_OP_NONE.\n\nI don't think walkdir() has any business calling LogStartupProgress()\nat all. It's supposed to be a generic function, not one that is only\navailable to be called from the startup process, or has different\nbehavior there. 
From my point of view, the right thing is to put the\nlogging calls into the particular callbacks that SyncDataDirectory()\nuses.\n\n> On Wed, Jul 21, 2021 at 04:47:32PM +0530, Bharath Rupireddy wrote:\n> > 1) I still don't see the need for two functions LogStartupProgress and\n> > LogStartupProgressComplete. Most of the code is duplicate. I think we\n> > can just do it with a single function something like [1]:\n>\n> I agree that one function can do this more succinctly. I think it's best to\n> use a separate enum value for START operations and END operations.\n\nMaybe I'm missing something here, but I don't understand the purpose\nof this. You can always combine two functions into one, but it's only\nworth doing if you end up with less code, which doesn't seem to be the\ncase here. The strings are all different and that's most of the\nfunction, and the other stuff that gets done isn't the same either, so\nyou'd just end up with a bunch of if-statements. That doesn't seem\nlike an improvement.\n\nThinking further, I guess I understand it from the caller's\nperspective. It's not necessarily clear why in some places we call\nLogStartupProgress() and others LogStartupProgressComplete(). Someone\nmight expect a function with \"complete\" in the name like that to only\nbe called once at the very end, rather than once at the end of a\nphase, and it does sort of make sense that you'd want to call one\nfunction everywhere rather than sometimes one and sometimes the other\n... but I'm not exactly sure how to get there from here. 
Having only\nLogStartupProgress() but having it do a giant if-statement to figure\nout whether we're mid-phase or end-of-phase does not seem like the\nright approach.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 26 Jul 2021 10:13:09 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Mon, Jul 26, 2021 at 10:13:09AM -0400, Robert Haas wrote:\n> I don't think walkdir() has any business calling LogStartupProgress()\n> at all. It's supposed to be a generic function, not one that is only\n> available to be called from the startup process, or has different\n> behavior there. From my point of view, the right thing is to put the\n> logging calls into the particular callbacks that SyncDataDirectory()\n> uses.\n\nYou're right - this is better.\n\nOn Wed, Jul 21, 2021 at 04:47:32PM +0530, Bharath Rupireddy wrote:\n> > > 1) I still don't see the need for two functions LogStartupProgress and\n> > > LogStartupProgressComplete. Most of the code is duplicate. I think we\n> > > can just do it with a single function something like [1]:\n> >\n> > I agree that one function can do this more succinctly. I think it's best to\n> > use a separate enum value for START operations and END operations.\n> \n> Maybe I'm missing something here, but I don't understand the purpose\n> of this. You can always combine two functions into one, but it's only\n> worth doing if you end up with less code, which doesn't seem to be the\n> case here.\n\n 4 files changed, 39 insertions(+), 71 deletions(-)\n\n> ... but I'm not exactly sure how to get there from here. Having only\n> LogStartupProgress() but having it do a giant if-statement to figure\n> out whether we're mid-phase or end-of-phase does not seem like the\n> right approach.\n\nI used a bool arg and negation to handle within a single switch. 
Maybe it's\ncleaner to use a separate enum value for each DONE, and set a local done flag.\n\n startup[29675] LOG: database system was interrupted; last known up at 2021-07-26 10:23:04 CDT\n startup[29675] LOG: syncing data directory (fsync), elapsed time: 1.38 s, current path: ./pg_ident.conf\n startup[29675] LOG: data directory sync (fsync) complete after 1.72 s\n startup[29675] LOG: database system was not properly shut down; automatic recovery in progress\n startup[29675] LOG: resetting unlogged relations (cleanup) complete after 0.00 s\n startup[29675] LOG: redo starts at 0/17BE500\n startup[29675] LOG: redo in progress, elapsed time: 1.00 s, current LSN: 0/35D7CB8\n startup[29675] LOG: redo in progress, elapsed time: 2.00 s, current LSN: 0/54A6918\n startup[29675] LOG: redo in progress, elapsed time: 3.00 s, current LSN: 0/7370570\n startup[29675] LOG: redo in progress, elapsed time: 4.00 s, current LSN: 0/924D8A0\n startup[29675] LOG: redo done at 0/9FFFFB8 system usage: CPU: user: 4.28 s, system: 0.15 s, elapsed: 4.44 s\n startup[29675] LOG: resetting unlogged relations (init) complete after 0.03 s\n startup[29675] LOG: checkpoint starting: end-of-recovery immediate\n startup[29675] LOG: checkpoint complete: wrote 9872 buffers (60.3%); 0 WAL file(s) added, 0 removed, 8 recycled; write=0.136 s, sync=0.801 s, total=1.260 s; sync files=21, longest=0.774 s, average=B", "msg_date": "Mon, 26 Jul 2021 10:30:23 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Mon, Jul 26, 2021 at 11:30 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > Maybe I'm missing something here, but I don't understand the purpose\n> > of this. 
You can always combine two functions into one, but it's only\n> > worth doing if you end up with less code, which doesn't seem to be the\n> > case here.\n>\n> 4 files changed, 39 insertions(+), 71 deletions(-)\n\nHmm. I don't completely hate that, but I don't love it either. I don't\nthink the result is any easier to understand than the original, and in\nsome ways it's worse. In particular, you've flattened the separate\ncomments for each of the individual functions into a single-line\ncomment that is more generic than the comments it replaced. You could\nfix that and you'd still be slightly ahead on LOC, but I'm not\nconvinced that it's actually a real win.\n\n> > ... but I'm not exactly sure how to get there from here. Having only\n> > LogStartupProgress() but having it do a giant if-statement to figure\n> > out whether we're mid-phase or end-of-phase does not seem like the\n> > right approach.\n>\n> I used a bool arg and negation to handle within a single switch. Maybe it's\n> cleaner to use a separate enum value for each DONE, and set a local done flag.\n\nIf we're going to go the route of combining the functions, I agree\nthat a Boolean is the way to go; I think we have some existing\nprecedent for 'bool finished' rather than 'bool done'.\n\nBut I kind of wonder if we should have an enum in the first place. It\nfeels like we've got code in a bunch of places that just exists to\ndecide which enum value to use, and then code someplace else that\nturns around and decides which message to produce. That would be\nsensible if we were using the same enum values in lots of places, but\nthat's not the case here. So suppose we just moved the messages to the\nplaces where we're now calling LogStartupProgress() or\nLogStartupProgressComplete()? 
So something like this:\n\nif (should_report_startup_progress())\n ereport(LOG,\n (errmsg(\"syncing data directory (syncfs), elapsed\ntime: %ld.%02d s, current path: %s\",\n secs, (usecs / 10000), path)));\n\nWell, that doesn't quite work, because \"secs\" and \"usecs\" aren't going\nto exist in the caller. We can fix that easily enough: let's just make\nshould_report_startup_progress take arguments long *secs, int *usecs,\nand bool done. Then the above becomes:\n\nif (should_report_startup_progress(&secs, &usecs, false))\n ereport(LOG,\n (errmsg(\"syncing data directory (syncfs), elapsed\ntime: %ld.%02d s, current path: %s\",\n secs, (usecs / 10000), path)));\n\nAnd if this were the call site that corresponds to\nLogStartupProgressComplete(), we'd replace false with true. Now, the\nonly real disadvantage of this that I can see is that it requires the\ncaller to declare 'secs' and 'usecs', which is not a big deal, but\nmildly annoying perhaps. I think we can do better still with a little\nmacro magic. Suppose we define a macro report_startup_progress(force,\nmsg, ...) that expands to:\n\n{\nlong secs;\nint usecs;\nif (startup_progress_timer_expired(&secs, &usecs, force))\nereport(LOG, errmsg(msg, secs, usecs, ...));\n}\n\nThen the above just becomes this:\n\n report_startup_progress(false, \"syncing data directory (syncfs),\nelapsed time: %ld.%02d s, current path: %s\", path);\n\nThis would have the advantage that the strings we're using would be\npresent in the code that arranges to emit them, instead of being\nremoved to some other module, so I think it would be clearer. It would\nalso have the advantage of making it much easier to add further calls,\nif someone feels they want to do that. You don't have to run around\nand update enums and all the various things that use them, just copy\nand paste the line above and adjust as required.\n\nWith this design, we avoid a lot of \"action at a distance\". 
We don't\ndefine the message strings in a place far-removed from the code that\nwants to emit them any more. When someone wants a new progress\nmessage, they can just add another call to report_startup_progress()\nwherever it needs to go and they're done. They don't have to go run\nand update the enum and various switch statements. They're just done.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 26 Jul 2021 13:11:31 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "> > > I saw that you fixed it by calling InitStartupProgress() after the walkdir()\n> > > calls which do pre_sync_fname. So then walkdir is calling\n> > > LogStartupProgress(STARTUP_PROCESS_OP_FSYNC) even when it's not doing fsync,\n> > > and then LogStartupProgress() is returning because !AmStartupProcess().\n> >\n> > I don't think walkdir() has any business calling LogStartupProgress()\n> > at all. It's supposed to be a generic function, not one that is only\n> > available to be called from the startup process, or has different\n> > behavior there. From my point of view, the right thing is to put the\n> > logging calls into the particular callbacks that SyncDataDirectory()\n> > uses.\n>\n> You're right - this is better.\n\nI also agree that this is the better place to do it. Hence I have modified\nthe patch accordingly. The condition \"!AmStartupProcess()\" is added to\ndifferentiate whether the call is made from the startup process or some\nother process. Actually, StartupXLOG() gets called in two places: one in\nStartupProcessMain() and the other in InitPostgres(). As the logging\nof the startup progress is required only during the startup process\nand not in the other cases, I added the condition to confirm that the call\nis from the startup process. I did not find any other way to\ndifferentiate. 
Kindly let me know if there is a better\napproach to do this.\n\n> > > Maybe I'm missing something here, but I don't understand the purpose\n> > > of this. You can always combine two functions into one, but it's only\n> > > worth doing if you end up with less code, which doesn't seem to be the\n> > > case here.\n> >\n> > 4 files changed, 39 insertions(+), 71 deletions(-)\n>\n> Hmm. I don't completely hate that, but I don't love it either. I don't\n> think the result is any easier to understand than the original, and in\n> some ways it's worse. In particular, you've flattened the separate\n> comments for each of the individual functions into a single-line\n> comment that is more generic than the comments it replaced. You could\n> fix that and you'd still be slightly ahead on LOC, but I'm not\n> convinced that it's actually a real win.\n\nAs per my understanding, there are no changes required with regard to\nthis, so I have not made any changes.\n\nPlease find the updated patch attached. Kindly share your comments if any.\n\nOn Mon, Jul 26, 2021 at 10:41 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Jul 26, 2021 at 11:30 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > Maybe I'm missing something here, but I don't understand the purpose\n> > > of this. You can always combine two functions into one, but it's only\n> > > worth doing if you end up with less code, which doesn't seem to be the\n> > > case here.\n> >\n> > 4 files changed, 39 insertions(+), 71 deletions(-)\n>\n> Hmm. I don't completely hate that, but I don't love it either. I don't\n> think the result is any easier to understand than the original, and in\n> some ways it's worse. In particular, you've flattened the separate\n> comments for each of the individual functions into a single-line\n> comment that is more generic than the comments it replaced. You could\n> fix that and you'd still be slightly ahead on LOC, but I'm not\n> convinced that it's actually a real win.\n>\n> > > ... 
but I'm not exactly sure how to get there from here. Having only\n> > > LogStartupProgress() but having it do a giant if-statement to figure\n> > > out whether we're mid-phase or end-of-phase does not seem like the\n> > > right approach.\n> >\n> > I used a bool arg and negation to handle within a single switch. Maybe it's\n> > cleaner to use a separate enum value for each DONE, and set a local done flag.\n>\n> If we're going to go the route of combining the functions, I agree\n> that a Boolean is the way to go; I think we have some existing\n> precedent for 'bool finished' rather than 'bool done'.\n>\n> But I kind of wonder if we should have an enum in the first place. It\n> feels like we've got code in a bunch of places that just exists to\n> decide which enum value to use, and then code someplace else that\n> turns around and decides which message to produce. That would be\n> sensible if we were using the same enum values in lots of places, but\n> that's not the case here. So suppose we just moved the messages to the\n> places where we're now calling LogStartupProgress() or\n> LogStartupProgressComplete()? So something like this:\n>\n> if (should_report_startup_progress())\n> ereport(LOG,\n> (errmsg(\"syncing data directory (syncfs), elapsed\n> time: %ld.%02d s, current path: %s\",\n> secs, (usecs / 10000), path)));\n>\n> Well, that doesn't quite work, because \"secs\" and \"usecs\" aren't going\n> to exist in the caller. We can fix that easily enough: let's just make\n> should_report_startup_progress take arguments long *secs, int *usecs,\n> and bool done. Then the above becomes:\n>\n> if (should_report_startup_progress(&secs, &usecs, false))\n> ereport(LOG,\n> (errmsg(\"syncing data directory (syncfs), elapsed\n> time: %ld.%02d s, current path: %s\",\n> secs, (usecs / 10000), path)));\n>\n> And if this were the call site that corresponds to\n> LogStartupProgressComplete(), we'd replace false with true. 
Now, the\n> only real disadvantage of this that I can see is that it requires the\n> caller to declare 'secs' and 'usecs', which is not a big deal, but\n> mildly annoying perhaps. I think we can do better still with a little\n> macro magic. Suppose we define a macro report_startup_progress(force,\n> msg, ...) that expands to:\n>\n> {\n> long secs;\n> int usecs;\n> if (startup_progress_timer_expired(&secs, &usecs, force))\n> ereport(LOG, errmsg(msg, secs, usecs, ...));\n> }\n>\n> Then the above just becomes this:\n>\n> report_startup_progress(false, \"syncing data directory (syncfs),\n> elapsed time: %ld.%02d s, current path: %s\", path);\n>\n> This would have the advantage that the strings we're using would be\n> present in the code that arranges to emit them, instead of being\n> removed to some other module, so I think it would be clearer. It would\n> also have the advantage of making it much easier to add further calls,\n> if someone feels they want to do that. You don't have to run around\n> and update enums and all the various things that use them, just copy\n> and paste the line above and adjust as required.\n>\n> With this design, we avoid a lot of \"action at a distance\". We don't\n> define the message strings in a place far-removed from the code that\n> wants to emit them any more. When someone wants a new progress\n> message, they can just add another call to report_statup_progress()\n> wherever it needs to go and they're done. They don't have to go run\n> and update the enum and various switch statements. They're just done.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com", "msg_date": "Wed, 28 Jul 2021 14:54:46 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Wed, Jul 28, 2021 at 5:24 AM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n> I also agree that this is the better place to do it. 
Hence modified\n> the patch accordingly. The condition \"!AmStartupProcess()\" is added to\n> differentiate whether the call is done from a startup process or some\n> other process. Actually StartupXLOG() gets called in 2 places. one in\n> StartupProcessMain() and the other in InitPostgres(). As the logging\n> of the startup progress is required only during the startup process\n> and not in the other cases,\n\nThe InitPostgres() case occurs when the server is started in bootstrap\nmode (during initdb) or in single-user mode (postgres --single). I do\nnot see any reason why we shouldn't produce progress messages in at\nleast the latter case. I suspect that someone who is in the rather\ndesperate scenario of having to use single-user mode would really like\nto know how long the server is going to take to start up.\n\nPerhaps during initdb we don't want messages, but I'm not sure that we\nneed to do anything about that here. None of the messages that the\nserver normally produces show up when you run initdb, so I guess they\nare getting redirected to /dev/null or something.\n\nSo I don't think that using AmStartupProcess() for this purpose is right.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 28 Jul 2021 09:32:16 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Wed, Jul 28, 2021 at 7:02 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Jul 28, 2021 at 5:24 AM Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> > I also agree that this is the better place to do it. Hence modified\n> > the patch accordingly. The condition \"!AmStartupProcess()\" is added to\n> > differentiate whether the call is done from a startup process or some\n> > other process. Actually StartupXLOG() gets called in 2 places. one in\n> > StartupProcessMain() and the other in InitPostgres(). 
As the logging\n> > of the startup progress is required only during the startup process\n> > and not in the other cases,\n>\n> The InitPostgres() case occurs when the server is started in bootstrap\n> mode (during initdb) or in single-user mode (postgres --single). I do\n> not see any reason why we shouldn't produce progress messages in at\n> least the latter case. I suspect that someone who is in the rather\n> desperate scenario of having to use single-user mode would really like\n> to know how long the server is going to take to start up.\n\n+1 to emitting the same log messages in single-user mode and, basically,\nfor whoever calls StartupXLOG. Do we need to adjust the GUC parameter\nlog_startup_progress_interval (to a reasonable value) so that the logs are\ngenerated by default?\n\n> Perhaps during initdb we don't want messages, but I'm not sure that we\n> need to do anything about that here. None of the messages that the\n> server normally produces show up when you run initdb, so I guess they\n> are getting redirected to /dev/null or something.\n\nI enabled the below log message in XLogFlush and ran initdb; it\nprints these logs to stdout, so it looks like the logs have not been\nredirected to /dev/null in initdb/bootstrap mode.\n\n#ifdef WAL_DEBUG\nif (XLOG_DEBUG)\nelog(LOG, \"xlog flush request %X/%X; write %X/%X; flush %X/%X\",\nLSN_FORMAT_ARGS(record),\nLSN_FORMAT_ARGS(LogwrtResult.Write),\nLSN_FORMAT_ARGS(LogwrtResult.Flush));\n#endif\n\nSo, even in bootstrap mode, can we use the above #ifdef WAL_DEBUG and\nXLOG_DEBUG to print those logs? It looks like the problem with these\nmacros is that they are not settable by normal users in the production\nenvironment, but only by the developers. 
I'm not sure how much it is\nhelpful to print the startup process progress logs in bootstrap mode.\nMaybe we can use the IsBootstrapProcessingMode macro to disable these\nlogs in the bootstrap mode.\n\n> So I don't think that using AmStartupProcess() for this purpose is right.\n\nAgree.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 28 Jul 2021 20:55:27 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Wed, Jul 28, 2021 at 11:25 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> > Perhaps during initdb we don't want messages, but I'm not sure that we\n> > need to do anything about that here. None of the messages that the\n> > server normally produces show up when you run initdb, so I guess they\n> > are getting redirected to /dev/null or something.\n>\n> I enabled the below log message in XLogFlush and ran initdb, it is\n> printing these logs onto the stdout, looks like the logs have not been\n> redirected to /dev/null in initdb/bootstrap mode.\n>\n> #ifdef WAL_DEBUG\n> if (XLOG_DEBUG)\n> elog(LOG, \"xlog flush request %X/%X; write %X/%X; flush %X/%X\",\n> LSN_FORMAT_ARGS(record),\n> LSN_FORMAT_ARGS(LogwrtResult.Write),\n> LSN_FORMAT_ARGS(LogwrtResult.Flush));\n> #endif\n>\n> So, even in bootstrap mode, can we use the above #ifdef WAL_DEBUG and\n> XLOG_DEBUG to print those logs? It looks like the problem with these\n> macros is that they are not settable by normal users in the production\n> environment, but only by the developers. I'm not sure how much it is\n> helpful to print the startup process progress logs in bootstrap mode.\n> Maybe we can use the IsBootstrapProcessingMode macro to disable these\n> logs in the bootstrap mode.\n\nI don't think we should drag XLOG_DEBUG into this. That's a debugging\nfacility that isn't really relevant to the topic at hand. 
The point is\nthe server normally prints a bunch of messages that we don't see in\nbootstrap mode. For example:\n\n[rhaas pgsql]$ postgres\n2021-07-28 11:32:33.824 EDT [36801] LOG: starting PostgreSQL 15devel\non x86_64-apple-darwin19.6.0, compiled by clang version 5.0.2\n(tags/RELEASE_502/final), 64-bit\n2021-07-28 11:32:33.825 EDT [36801] LOG: listening on IPv6 address\n\"::1\", port 5432\n2021-07-28 11:32:33.825 EDT [36801] LOG: listening on IPv4 address\n\"127.0.0.1\", port 5432\n2021-07-28 11:32:33.826 EDT [36801] LOG: listening on Unix socket\n\"/tmp/.s.PGSQL.5432\"\n2021-07-28 11:32:33.846 EDT [36802] LOG: database system was shut\ndown at 2021-07-28 11:32:32 EDT\n2021-07-28 11:32:33.857 EDT [36801] LOG: database system is ready to\naccept connections\n\nNone of that shows up when you run initdb. Taking a fast look at the\ncode, I don't think the reasons are the same for all of those\nmessages. Some of the code isn't reached, whereas e.g. \"database\nsystem was shut down at 2021-07-28 11:32:32 EDT\" is special-cased. I'm\nnot sure right off the top of my head what this code should do, but\nideally it looks something like one of the cases we've already got.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 28 Jul 2021 11:36:50 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "> The InitPostgres() case occurs when the server is started in bootstrap\n> mode (during initdb) or in single-user mode (postgres --single). I do\n> not see any reason why we shouldn't produce progress messages in at\n> least the latter case. I suspect that someone who is in the rather\n> desperate scenario of having to use single-user mode would really like\n> to know how long the server is going to take to start up.\n\nThanks for sharing the information. 
I have done the necessary changes\nto show the logs during the latter case (postgres --single) and\nverified the log messages.\n\n> +1 to emit the same log messages in single-user mode and basically\n> whoever calls StartupXLOG. Do we need to adjust the GUC parameter\n> log_startup_progress_interval(a reasonable value) so that the logs are\n> generated by default?\n\nAt present, this feature is enabled by default and the initial value\nset for log_startup_progress_interval is 10 seconds.\n\n\nOn Wed, Jul 28, 2021 at 9:07 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Jul 28, 2021 at 11:25 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > Perhaps during initdb we don't want messages, but I'm not sure that we\n> > > need to do anything about that here. None of the messages that the\n> > > server normally produces show up when you run initdb, so I guess they\n> > > are getting redirected to /dev/null or something.\n> >\n> > I enabled the below log message in XLogFlush and ran initdb, it is\n> > printing these logs onto the stdout, looks like the logs have not been\n> > redirected to /dev/null in initdb/bootstrap mode.\n> >\n> > #ifdef WAL_DEBUG\n> > if (XLOG_DEBUG)\n> > elog(LOG, \"xlog flush request %X/%X; write %X/%X; flush %X/%X\",\n> > LSN_FORMAT_ARGS(record),\n> > LSN_FORMAT_ARGS(LogwrtResult.Write),\n> > LSN_FORMAT_ARGS(LogwrtResult.Flush));\n> > #endif\n> >\n> > So, even in bootstrap mode, can we use the above #ifdef WAL_DEBUG and\n> > XLOG_DEBUG to print those logs? It looks like the problem with these\n> > macros is that they are not settable by normal users in the production\n> > environment, but only by the developers. I'm not sure how much it is\n> > helpful to print the startup process progress logs in bootstrap mode.\n> > Maybe we can use the IsBootstrapProcessingMode macro to disable these\n> > logs in the bootstrap mode.\n>\n> I don't think we should drag XLOG_DEBUG into this. 
That's a debugging\n> facility that isn't really relevant to the topic at hand. The point is\n> the server normally prints a bunch of messages that we don't see in\n> bootstrap mode. For example:\n>\n> [rhaas pgsql]$ postgres\n> 2021-07-28 11:32:33.824 EDT [36801] LOG: starting PostgreSQL 15devel\n> on x86_64-apple-darwin19.6.0, compiled by clang version 5.0.2\n> (tags/RELEASE_502/final), 64-bit\n> 2021-07-28 11:32:33.825 EDT [36801] LOG: listening on IPv6 address\n> \"::1\", port 5432\n> 2021-07-28 11:32:33.825 EDT [36801] LOG: listening on IPv4 address\n> \"127.0.0.1\", port 5432\n> 2021-07-28 11:32:33.826 EDT [36801] LOG: listening on Unix socket\n> \"/tmp/.s.PGSQL.5432\"\n> 2021-07-28 11:32:33.846 EDT [36802] LOG: database system was shut\n> down at 2021-07-28 11:32:32 EDT\n> 2021-07-28 11:32:33.857 EDT [36801] LOG: database system is ready to\n> accept connections\n>\n> None of that shows up when you run initdb. Taking a fast look at the\n> code, I don't think the reasons are the same for all of those\n> messages. Some of the code isn't reached, whereas e.g. \"database\n> system was shut down at 2021-07-28 11:32:32 EDT\" is special-cased. I'm\n> not sure right off the top of my head what this code should do, but\n> ideally it looks something like one of the cases we've already got.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com", "msg_date": "Thu, 29 Jul 2021 14:26:53 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Thu, Jul 29, 2021 at 4:56 AM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n> Thanks for sharing the information. I have done the necessary changes\n> to show the logs during the latter case (postgres --single) and\n> verified the log messages.\n\nThanks. 
Can you please have a look at what I suggested down toward the\nbottom of http://postgr.es/m/CA+TgmoaP2wEFSktmCgwT9LXuz7Y99HNdUYshpk7qNFuQB98g6g@mail.gmail.com\n?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 29 Jul 2021 12:18:44 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "> Thanks. Can you please have a look at what I suggested down toward the\n> bottom of http://postgr.es/m/CA+TgmoaP2wEFSktmCgwT9LXuz7Y99HNdUYshpk7qNFuQB98g6g@mail.gmail.com\n> ?\n>\n> If we're going to go the route of combining the functions, I agree\n> that a Boolean is the way to go; I think we have some existing\n> precedent for 'bool finished' rather than 'bool done'.\n>\n> But I kind of wonder if we should have an enum in the first place. It\n> feels like we've got code in a bunch of places that just exists to\n> decide which enum value to use, and then code someplace else that\n> turns around and decides which message to produce. That would be\n> sensible if we were using the same enum values in lots of places, but\n> that's not the case here. So suppose we just moved the messages to the\n> places where we're now calling LogStartupProgress() or\n> LogStartupProgressComplete()? So something like this:\n\nSorry. I thought it was related to the discussion of deciding whether\nLogStartupProgress() and LogStartupProgressComplete() should be\ncombined or not. I feel it's a really nice design. With this we avoid\nan \"action at a distance\" issue and it's easy to use. If we were\nreporting the same kind of messages in multiple places, then the current\napproach of using an enum would be more suitable, since we don't have to\nworry about matching the log message string. But in the current scenario,\nwe are not using the same kind of messages in multiple places (I feel such\na scenario will not occur in the future either. 
Even if there is a similar\noperation, it can be distinguished, as resetting unlogged relations\nis distinguished by init and clean. Kindly mention if you can foresee\nany such scenario), hence the approach you are suggesting is the\nbest fit.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Thu, Jul 29, 2021 at 9:48 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Jul 29, 2021 at 4:56 AM Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> > Thanks for sharing the information. I have done the necessary changes\n> > to show the logs during the latter case (postgres --single) and\n> > verified the log messages.\n>\n> Thanks. Can you please have a look at what I suggested down toward the\n> bottom of http://postgr.es/m/CA+TgmoaP2wEFSktmCgwT9LXuz7Y99HNdUYshpk7qNFuQB98g6g@mail.gmail.com\n> ?\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 30 Jul 2021 10:40:36 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "> Thanks. Can you please have a look at what I suggested down toward the\n> bottom of http://postgr.es/m/CA+TgmoaP2wEFSktmCgwT9LXuz7Y99HNdUYshpk7qNFuQB98g6g@mail.gmail.com\n?\n\nI have implemented the above approach and verified the patch. Kindly have a\nlook and share your thoughts.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Fri, Jul 30, 2021 at 10:40 AM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> > Thanks. Can you please have a look at what I suggested down toward the\n> > bottom of http://postgr.es/m/CA+TgmoaP2wEFSktmCgwT9LXuz7Y99HNdUYshpk7qNFuQB98g6g@mail.gmail.com\n> > ?\n> >\n> > If we're going to go the route of combining the functions, I agree\n> > that a Boolean is the way to go; I think we have some existing\n> > precedent for 'bool finished' rather than 'bool done'.\n> >\n> > But I kind of wonder if we should have an enum in the first place. 
It\n> > feels like we've got code in a bunch of places that just exists to\n> > decide which enum value to use, and then code someplace else that\n> > turns around and decides which message to produce. That would be\n> > sensible if we were using the same enum values in lots of places, but\n> > that's not the case here. So suppose we just moved the messages to the\n> > places where we're now calling LogStartupProgress() or\n> > LogStartupProgressComplete()? So something like this:\n>\n> Sorry. I thought it is related to the discussion of deciding whether\n> LogStartupProgress() and LogStartupProgressComplete() should be\n> combined or not. I feel it's a really nice design. With this we avoid\n> a \"action at a distance\" issue and its easy to use. If we are\n> reporting the same kind of msgs at multiple places then the current\n> approach of using enum will be more suitable since we don't have to\n> worry about matching the log msg string. But in the current scenario,\n> we are not using the same kind of msgs at multiple places (I feel such\n> scenario will not occur in future also. Even if there is similar\n> operation, it can be distinguished like resetting unlogged relations\n> is distinguished by init and clean. Kindly mention if you can oversee\n> any such scenario), hence the approach you are suggesting will be a\n> best suit.\n>\n> Thanks & Regards,\n> Nitin Jadhav\n>\n> On Thu, Jul 29, 2021 at 9:48 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Thu, Jul 29, 2021 at 4:56 AM Nitin Jadhav\n> > <nitinjadhavpostgres@gmail.com> wrote:\n> > > Thanks for sharing the information. I have done the necessary changes\n> > > to show the logs during the latter case (postgres --single) and\n> > > verified the log messages.\n> >\n> > Thanks. 
Can you please have a look at what I suggested down toward the\n> > bottom of http://postgr.es/m/CA+TgmoaP2wEFSktmCgwT9LXuz7Y99HNdUYshpk7qNFuQB98g6g@mail.gmail.com\n> > ?\n> >\n> > --\n> > Robert Haas\n> > EDB: http://www.enterprisedb.com", "msg_date": "Tue, 3 Aug 2021 12:18:10 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "Two issues that I saw:\n\nThe first syncfs message is not output, because it's before\nInitStartupProgress() is called:\n\n2021-08-03 07:53:02.176 CDT startup[9717] LOG: database system was interrupted; last known up at 2021-08-03 07:52:15 CDT\n2021-08-03 07:53:02.733 CDT startup[9717] LOG: data directory sync (syncfs) complete after 0.55 s\n2021-08-03 07:53:02.734 CDT startup[9717] LOG: database system was not properly shut down; automatic recovery in progress\n\nFP exception when the GUC is set to 0:\n\n2021-08-03 07:53:02.877 CDT postmaster[9715] LOG: startup process (PID 9717) was terminated by signal 8: Floating point exception\n\nProbably due to mod zero operation.\nThis prevents the process from starting.\n\n+ enable_timeout_after(LOG_STARTUP_PROGRESS_TIMEOUT,\n+ (interval_in_ms - (elapsed_ms % interval_in_ms)));\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 3 Aug 2021 08:10:09 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Tue, Aug 3, 2021 at 2:48 AM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n> Implemented the above approach and verified the patch. Kindly have a\n> look and share your thoughts.\n\n+LogStartupProgressTimerExpired(bool force, long *secs, int *usecs)\n\nThe name of this function begins with \"log\" but it does not log\nanything, so that's probably a sign that you should rethink the name\nof the function. 
I suggested startup_progress_timer_expired()\nupthread, but now I think maybe we should call it\nstartup_progress_timer_has_expired() and then rename the Boolean\nyou've called logStartupProgressTimeout to\nstartup_progress_timer_expired. Also, the argument \"bool force\"\ndoesn't really make sense for this function, which is why I suggested\nabove calling it \"bool done\" instead.\n\n+ elapsed_ms = (seconds * 1000) + (useconds / 1000);\n+ interval_in_ms = log_startup_progress_interval * 1000;\n+ enable_timeout_after(LOG_STARTUP_PROGRESS_TIMEOUT,\n+ (interval_in_ms - (elapsed_ms % interval_in_ms)));\n\nThis will work correctly only if elapsed_ms is equal to interval_in_ms\nor slightly greater than interval_in_ms. But if elapsed_ms is greater\nthan two times interval_in_ms, then it will produce pretty much random\nresults. If elapsed_ms is negative because the system clock gets set\nbackward, a possibility I've already mentioned to you in a previous\nreview, then it will also misbehave. I actually don't think\nenable_timeout_after() is very easy to use for this kind of thing. At\nleast for me, it's way easier to think about calculating the timestamp\nat which I want the timer to expire. Maybe something along these\nlines:\n\nnext_timeout = last_startup_progress_timeout + interval;\nif (next_timeout < now)\n{\n // Either the timeout was processed so late that we missed an entire cycle,\n // or the system clock was set backwards.\n next_timeout = now + interval;\n}\nenable_timeout_at(next_timeout);\n\nAlso, I said before that I thought it was OK that you were logging a\nline at the end of every operation as well as after every N\nmilliseconds. But, the more I think about it, the less I like it. We\nalready have a 'redo done' line that shows up at the end of redo,\nwhich the patch wisely does not duplicate. But it's not quite clear\nthat any of these other things are important enough to bother\nmentioning in the log unless they take a long time. 
After an immediate\nshutdown of an empty cluster, with this patch applied, I get 3 extra\nlog messages:\n\n2021-08-03 10:17:49.630 EDT [17567] LOG: data directory sync (fsync)\ncomplete after 0.13 s\n2021-08-03 10:17:49.633 EDT [17567] LOG: resetting unlogged relations\n(cleanup) complete after 0.00 s\n2021-08-03 10:17:49.635 EDT [17567] LOG: resetting unlogged relations\n(init) complete after 0.00 s\n\nThat doesn't seem like information anyone really needs. If it had\ntaken a long time, it would have been worth logging, but in the normal\ncase where it doesn't, it's just clutter. Another funny thing is that,\nas you've coded it, those additional log lines only appear when\nlog_startup_progress_interval != 0. That's strange. It seems\nparticularly strange because of the existing precedent where 'redo\ndone' appears regardless of any setting, but also because when I set,\nsay, a 10s interval, I guess I expect something to happen every 10s.\nMaking something happen once at the end is different.\n\nSo I think we should take this out, which would permit simplifying a\nbunch of code. The four places where you call\nereport_startup_progress(true, ...) would go away.\nereport_startup_progress() would no longer need a Boolean argument,\nand neither would LogStartupProgressTimerExpired() /\nstartup_progress_timer_has_expired(). Note that there's no real need\nto disable the timeout when we're done with it. It's fine if we do,\nbut if we don't, it's also not a big deal; all that happens if we\nleave the timer scheduled and let it expire is that it will set a\nBoolean flag that nobody will care about. So what I'm thinking about\nis that we could just have, say, reset_startup_progress_timeout() and\nstartup_progress_timeout_has_expired().\nreset_startup_progress_timeout() would just do exactly what I showed\nabove to reset the timeout, and you'd call it at the beginning of each\nphase. 
And startup_progress_timeout_has_expired() would look roughly\nlike this:\n\nif (!startup_progress_timer_expired)\n return;\nnow = GetCurrentTimestamp();\n// compute timestamp difference\nlast_startup_progress_timeout = now;\nreset_startup_progress_timeout();\n\nWith these changes you'd have only 1 place in the code that needs to\ncare about log_startup_progress_interval, as opposed to 3 as you have\nit currently, and only one place that enables the timeout, as opposed\nto 2 as you have it currently. I think that would be tidier.\n\nI think we also should consider where to put the new functions\nintroduced by this patch, and the GUC. You put them in xlog.c which is\nreasonable since that is where StartupXLOG() lives. However, xlog.c is\nalso a gigantic file, and xlog.h is included in a lot of places, most\nof which aren't going to care about the new things you're adding to\nthat file at all. So I'm thinking it might make more sense to put the\nnew code in src/backend/postmaster/startup.c. That is actually a\nbetter thematic fit given that this is really about the startup\nprocess specifically, not WAL-logging in general. Then reinit.c would\ninclude startup.h instead of xlog.h, which seems better, because I\ndon't think we want any actual xlog operations to happen from within\nthat file, so better not to be including xlog.h.\n\nThe patch currently lacks documentation. 
It needs to update config.sgml.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 3 Aug 2021 10:51:04 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "Thanks for the detailed explanation.\n\n> + elapsed_ms = (seconds * 1000) + (useconds / 1000);\n> + interval_in_ms = log_startup_progress_interval * 1000;\n> + enable_timeout_after(LOG_STARTUP_PROGRESS_TIMEOUT,\n> + (interval_in_ms - (elapsed_ms % interval_in_ms)));\n>\n> This will work correctly only if elapsed_ms is equal to interval_in_ms\n> or slightly greater than interval_ms. But if elapsed_ms is greater\n> than two times interval_ms, then it will produce pretty much random\n> results. If elapsed_ms is negative because the system clock gets set\n> backward, a possibility I've already mentioned to you in a previous\n> review, then it will also misbehave. I actually don't think\n> enable_timeout_after() is very easy to use for this kind of thing. At\n> least for me, it's way easier to think about calculating the timestamp\n> at which I want the timer to expire. Maybe something along these\n> lines:\n>\n> next_timeout = last_startup_progress_timeout + interval;\n> if (next_timeout < now)\n> {\n> // Either the timeout was processed so late that we missed an entire cycle,\n> // or the system clock was set backwards.\n> next_timeout = now + interval;\n> }\n> enable_timeout_at(next_timeout);\n>\n> So I think we should take this out, which would permit simplifying a\n> bunch of code.The four places where you call\n> ereport_startup_progress(true, ...) would go away.\n> ereport_startup_progress() would no longer need a Boolean argument,\n> and neither would LogStartupProgressTimerExpired() /\n> startup_progress_timer_has_expired(). Note that there's no real need\n> to disable the timeout when we're done with it. 
It's fine if we do,\n> but if we don't, it's also not a big deal; all that happens if we\n> leave the timer scheduled and let it expire is that it will set a\n> Boolean flag that nobody will care about. So what I'm thinking about\n> is that we could just have, say, reset_startup_progress_timeout() and\n> startup_progress_timeout_has_expired().\n> reset_startup_progress_timeout() would just do exactly what I showed\n> above to reset the timeout, and you'd call it at the beginning of each\n> phase. And startup_progress_timeout_has_expired() would look roughly\n> like this:\n>\n> if (!startup_progress_timer_expired)\n> return;\n> now = GetCurrentTimestamp();\n> // compute timestamp difference\n> last_startup_progress_timeout = now;\n> reset_startup_progress_timeout();\n\nThis seems a little confusing. We are setting\nlast_startup_progress_timeout = now and then calling\nreset_startup_progress_timeout(), which initially calculates\nnext_timeout from the value of last_startup_progress_timeout and then\nchecks whether next_timeout is less than now; that doesn't make sense\nto me. I feel we should calculate the next_timeout based on the time\nit is supposed to fire next, so we should set\nlast_startup_progress_timeout = next_timeout after enabling the timer.\nAlso, with the two functions mentioned above, we also need\nInitStartupProgress(), which sets the initial value of\nstartupProcessOpStartTime, used to calculate the time difference\ndisplayed in the logs. 
I could see those functions like\nbelow.\n\nInitStartupProgress(void)\n{\n startupProcessOpStartTime = GetCurrentTimestamp();\n ResetStartupProgressTimeout(startupProcessOpStartTime);\n}\n\nreset_startup_progress_timeout(TimeStampTz now)\n{\n next_timeout = last_startup_progress_timeout + interval;\n if (next_timeout < now)\n {\n // Either the timeout was processed so late that we missed an entire cycle,\n // or the system clock was set backwards.\n next_timeout = now + interval;\n }\n enable_timeout_at(next_timeout);\n last_startup_progress_timeout = next_timeout;\n}\n\nstartup_progress_timeout_has_expired()\n{\n if (!startup_progress_timer_expired)\n return;\n now = GetCurrentTimestamp();\n // compute timestamp difference based on startupProcessOpStartTime\n reset_startup_progress_timeout(now);\n}\n\nKindly share your thoughts and correct me if I am wrong.\n\n> I think we also should consider where to put the new functions\n> introduced by this patch, and the GUC. You put them in xlog.c which is\n> reasonable since that is where StartupXLOG() lives. However, xlog.c is\n> also a gigantic file, and xlog.h is included in a lot of places, most\n> of which aren't going to care about the new things you're adding to\n> that file at all. So I'm thinking it might make more sense to put the\n> new code in src/backend/postmaster/startup.c. That is actually a\n> better thematic fit given that this is really about the startup\n> process specifically, not WAL-logging in general. Then reinit.c would\n> include startup.h instead of xlog.h, which seems better, because I\n> don't think we want any actual xlog operations to happen from within\n> that file, so better not to be including xlog.h.\n>\n> The patch currently lacks documentation. 
It needs to update config.sgml.\n\nI agree and I will take care in the next patch.\n\nThanks & Regards,\nNitin Jadhav\n\n\n\nOn Tue, Aug 3, 2021 at 8:21 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Aug 3, 2021 at 2:48 AM Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> > Implemented the above approach and verified the patch. Kindly have a\n> > look and share your thoughts.\n>\n> +LogStartupProgressTimerExpired(bool force, long *secs, int *usecs)\n>\n> The name of this function begins with \"log\" but it does not log\n> anything, so that's probably a sign that you should rethink the name\n> of the function. I suggested startup_progress_timer_expired()\n> upthread, but now I think maybe we should call it\n> startup_progress_timer_has_expired() and then renaming the Boolean\n> you've called logStartupProgressTimeout to\n> startup_progress_timer_expired. Also, the argument \"bool force\"\n> doesn't really make sense for this function, which is why I suggested\n> above calling it \"bool done\" instead.\n>\n> + elapsed_ms = (seconds * 1000) + (useconds / 1000);\n> + interval_in_ms = log_startup_progress_interval * 1000;\n> + enable_timeout_after(LOG_STARTUP_PROGRESS_TIMEOUT,\n> + (interval_in_ms - (elapsed_ms % interval_in_ms)));\n>\n> This will work correctly only if elapsed_ms is equal to interval_in_ms\n> or slightly greater than interval_ms. But if elapsed_ms is greater\n> than two times interval_ms, then it will produce pretty much random\n> results. If elapsed_ms is negative because the system clock gets set\n> backward, a possibility I've already mentioned to you in a previous\n> review, then it will also misbehave. I actually don't think\n> enable_timeout_after() is very easy to use for this kind of thing. At\n> least for me, it's way easier to think about calculating the timestamp\n> at which I want the timer to expire. 
Maybe something along these\n> lines:\n>\n> next_timeout = last_startup_progress_timeout + interval;\n> if (next_timeout < now)\n> {\n> // Either the timeout was processed so late that we missed an entire cycle,\n> // or the system clock was set backwards.\n> next_timeout = now + interval;\n> }\n> enable_timeout_at(next_timeout);\n>\n> Also, I said before that I thought it was OK that you were logging a\n> line at the end of every operation as well as after every N\n> milliseconds. But, the more I think about it, the less I like it. We\n> already have a 'redo done' line that shows up at the end of redo,\n> which the patch wisely does not duplicate. But it's not quite clear\n> that any of these other things are important enough to bother\n> mentioning in the log unless they take a long time. After an immediate\n> shutdown of an empty cluster, with this patch applied, I get 3 extra\n> log messages:\n>\n> 2021-08-03 10:17:49.630 EDT [17567] LOG: data directory sync (fsync)\n> complete after 0.13 s\n> 2021-08-03 10:17:49.633 EDT [17567] LOG: resetting unlogged relations\n> (cleanup) complete after 0.00 s\n> 2021-08-03 10:17:49.635 EDT [17567] LOG: resetting unlogged relations\n> (init) complete after 0.00 s\n>\n> That doesn't seem like information anyone really needs. If it had\n> taken a long time, it would have been worth logging, but in the normal\n> case where it doesn't, it's just clutter. Another funny thing is that,\n> as you've coded it, those additional log lines only appear when\n> log_startup_progress_interval != 0. That's strange. 
It seems\n> particularly strange because of the existing precedent where 'redo\n> done' appears regardless of any setting, but also because when I set,\n> say, a 10s interval, I guess I expect something to happen every 10s.\n> Making something happen once at the end is different.\n>\n> So I think we should take this out, which would permit simplifying a\n> bunch of code.The four places where you call\n> ereport_startup_progress(true, ...) would go away.\n> ereport_startup_progress() would no longer need a Boolean argument,\n> and neither would LogStartupProgressTimerExpired() /\n> startup_progress_timer_has_expired(). Note that there's no real need\n> to disable the timeout when we're done with it. It's fine if we do,\n> but if we don't, it's also not a big deal; all that happens if we\n> leave the timer scheduled and let it expire is that it will set a\n> Boolean flag that nobody will care about. So what I'm thinking about\n> is that we could just have, say, reset_startup_progress_timeout() and\n> startup_progress_timeout_has_expired().\n> reset_startup_progress_timeout() would just do exactly what I showed\n> above to reset the timeout, and you'd call it at the beginning of each\n> phase. And startup_progress_timeout_has_expired() would look roughly\n> like this:\n>\n> if (!startup_progress_timer_expired)\n> return;\n> now = GetCurrentTimestamp();\n> // compute timestamp difference\n> last_startup_progress_timeout = now;\n> reset_startup_progress_timeout();\n>\n> With these changes you'd have only 1 place in the code that needs to\n> care about log_startup_progress_interval, as opposed to 3 as you have\n> it currently, and only one place that enables the timeout, as opposed\n> to 2 as you have it currently. I think that would be tidier.\n>\n> I think we also should consider where to put the new functions\n> introduced by this patch, and the GUC. You put them in xlog.c which is\n> reasonable since that is where StartupXLOG() lives. 
However, xlog.c is\n> also a gigantic file, and xlog.h is included in a lot of places, most\n> of which aren't going to care about the new things you're adding to\n> that file at all. So I'm thinking it might make more sense to put the\n> new code in src/backend/postmaster/startup.c. That is actually a\n> better thematic fit given that this is really about the startup\n> process specifically, not WAL-logging in general. Then reinit.c would\n> include startup.h instead of xlog.h, which seems better, because I\n> don't think we want any actual xlog operations to happen from within\n> that file, so better not to be including xlog.h.\n>\n> The patch currently lacks documentation. It needs to update config.sgml.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 5 Aug 2021 17:11:38 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Thu, Aug 5, 2021 at 7:41 AM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n> This seems a little confusing. As we are setting\n> last_startup_progress_timeout = now and then calling\n> reset_startup_progress_timeout() which will calculate the next_time\n> based on the value of last_startup_progress_timeout initially and\n> checks whether next_timeout is less than now. It doesn't make sense to\n> me. I feel we should calculate the next_timeout based on the time that\n> it is supposed to fire next time. So we should set\n> last_startup_progress_timeout = next_timeout after enabling the timer.\n> Also I feel with the 2 functions mentioned above, we also need\n> InitStartupProgress() which sets the initial value to\n> startupProcessOpStartTime which is used to calculate the time\n> difference and display in the logs. 
I could see those functions like\n> below.\n>\n> InitStartupProgress(void)\n> {\n> startupProcessOpStartTime = GetCurrentTimestamp();\n> ResetStartupProgressTimeout(startupProcessOpStartTime);\n> }\n\nThis makes sense, but I think I'd like to have all the functions in\nthis patch use names_like_this() rather than NamesLikeThis().\n\n> reset_startup_progress_timeout(TimeStampTz now)\n> {\n> next_timeout = last_startup_progress_timeout + interval;\n> if (next_timeout < now)\n> {\n> // Either the timeout was processed so late that we missed an entire cycle,\n> // or the system clock was set backwards.\n> next_timeout = now + interval;\n> }\n> enable_timeout_at(next_timeout);\n> last_startup_progress_timeout = next_timeout;\n> }\n\nHmm, yeah, that seems good, but given this change, maybe the variables\nneed a little renaming. Like change last_startup_progress_timeout to\nscheduled_startup_progress_timeout, perhaps.\n\n> startup_progress_timeout_has_expired()\n> {\n> if (!startup_progress_timer_expired)\n> return;\n> now = GetCurrentTimestamp();\n> // compute timestamp difference based on startupProcessOpStartTime\n> reset_startup_progress_timeout(now);\n> }\n\nI guess this one needs to return a Boolean, actually.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 5 Aug 2021 10:19:41 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "Modified the reset_startup_progress_timeout() to take the second\nparameter which indicates whether it is called for initialization or\nfor resetting. 
Without this parameter there is a problem if we call\ninit_startup_progress() more than one time if there is no call to\nereport_startup_progress() in between as the code related to disabling\nthe timer has been removed.\n\nreset_startup_progress_timeout(TimeStampTz now, bool is_init)\n{\n if (is_init)\n next_timeout = now + interval;\n else\n {\n next_timeout = scheduled_startup_progress_timeout + interval;\n if (next_timeout < now)\n {\n // Either the timeout was processed so late that we missed an\nentire cycle,\n // or the system clock was set backwards.\n next_timeout = now + interval;\n }\n enable_timeout_at(next_timeout);\n scheduled_startup_progress_timeout = next_timeout;\n }\n}\n\nI have incorporated this in the attached patch. Please correct me if I am wrong.\n\n> This makes sense, but I think I'd like to have all the functions in\n> this patch use names_like_this() rather than NamesLikeThis().\n\nI have changed all the function names accordingly. But I would like to\nknow why it should be names_like_this() as there are many functions\nwith the format NamesLikeThis(). I would like to understand when to\nuse what, just to incorporate in the future patches.\n\n> Hmm, yeah, that seems good, but given this change, maybe the variables\n> need a little renaming. Like change last_startup_progress_timeout to\n> scheduled_startup_progress_timeout, perhaps.\n\nRight. Changed the variable name.\n\n> I guess this one needs to return a Boolean, actually.\n\nYes.\n\n> reset_startup_progress_timeout(TimeStampTz now)\n> {\n> next_timeout = last_startup_progress_timeout + interval;\n> if (next_timeout < now)\n> {\n> // Either the timeout was processed so late that we missed an entire cycle,\n> // or the system clock was set backwards.\n> next_timeout = now + interval;\n> }\n> enable_timeout_at(next_timeout);\n> last_startup_progress_timeout = next_timeout;\n> }\n\nAs per the above logic, I would like to discuss 2 cases.\n\nCase-1:\nFirst scheduled timeout is after 1 sec. 
But the time out occurred\nafter 1.5 sec. So the log msg prints after 1.5 sec. Next timer is\nscheduled after 2 sec (scheduled_startup_progress_timeout + interval).\nThe condition (next_timeout < now) gets evaluated to false and\neverything goes smooth.\n\nCase-2:\nFirst scheduled timeout is after 1 sec. But the timeout occurred after\n2.5 sec. So the log msg prints after 2.5 sec. Now next time is\nscheduled after 2 sec (scheduled_startup_progress_timeout + interval).\nSo the condition (next_timeout < now) will fail and the next_timeout\nwill get reset to 3.5 sec (2.5 + 1) and it continues. Is this\nbehaviour ok or should we set the next_timeout to 3 sec. Please share\nyour thoughts on this.\n\nThanks & Regards,\nNitin Jadhav\n\n\nOn Thu, Aug 5, 2021 at 7:49 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Aug 5, 2021 at 7:41 AM Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> > This seems a little confusing. As we are setting\n> > last_startup_progress_timeout = now and then calling\n> > reset_startup_progress_timeout() which will calculate the next_time\n> > based on the value of last_startup_progress_timeout initially and\n> > checks whether next_timeout is less than now. It doesn't make sense to\n> > me. I feel we should calculate the next_timeout based on the time that\n> > it is supposed to fire next time. So we should set\n> > last_startup_progress_timeout = next_timeout after enabling the timer.\n> > Also I feel with the 2 functions mentioned above, we also need\n> > InitStartupProgress() which sets the initial value to\n> > startupProcessOpStartTime which is used to calculate the time\n> > difference and display in the logs. 
I could see those functions like\n> > below.\n> >\n> > InitStartupProgress(void)\n> > {\n> > startupProcessOpStartTime = GetCurrentTimestamp();\n> > ResetStartupProgressTimeout(startupProcessOpStartTime);\n> > }\n>\n> This makes sense, but I think I'd like to have all the functions in\n> this patch use names_like_this() rather than NamesLikeThis().\n>\n> > reset_startup_progress_timeout(TimeStampTz now)\n> > {\n> > next_timeout = last_startup_progress_timeout + interval;\n> > if (next_timeout < now)\n> > {\n> > // Either the timeout was processed so late that we missed an entire cycle,\n> > // or the system clock was set backwards.\n> > next_timeout = now + interval;\n> > }\n> > enable_timeout_at(next_timeout);\n> > last_startup_progress_timeout = next_timeout;\n> > }\n>\n> Hmm, yeah, that seems good, but given this change, maybe the variables\n> need a little renaming. Like change last_startup_progress_timeout to\n> scheduled_startup_progress_timeout, perhaps.\n>\n> > startup_progress_timeout_has_expired()\n> > {\n> > if (!startup_progress_timer_expired)\n> > return;\n> > now = GetCurrentTimestamp();\n> > // compute timestamp difference based on startupProcessOpStartTime\n> > reset_startup_progress_timeout(now);\n> > }\n>\n> I guess this one needs to return a Boolean, actually.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com", "msg_date": "Mon, 9 Aug 2021 20:50:59 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Mon, Aug 9, 2021 at 11:20 AM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n> Modified the reset_startup_progress_timeout() to take the second\n> parameter which indicates whether it is called for initialization or\n> for resetting. 
Without this parameter there is a problem if we call\n> init_startup_progress() more than one time if there is no call to\n> ereport_startup_progress() in between as the code related to disabling\n> the timer has been removed.\n\nI'd really like to avoid this. I don't see why it's necessary. You say\nit causes a problem, but you don't explain what problem it causes.\nenable_timeout_at() will disable the timer if not already done. I\nthink all we need to do is set scheduled_startup_progress_timeout = 0\nbefore calling reset_startup_progress_timeout() in the \"init\" case and\ndon't do that for the non-init case. If that's not quite right, maybe\nyou can work out something that does work. But adding an is_init flag\nto a function and having no common code between the is_init = true\ncase and the is_init = false case is exactly the kind of thing that I\ndon't want here. I want as much common code as possible.\n\n> > This makes sense, but I think I'd like to have all the functions in\n> > this patch use names_like_this() rather than NamesLikeThis().\n>\n> I have changed all the function names accordingly. But I would like to\n> know why it should be names_like_this() as there are many functions\n> with the format NamesLikeThis(). I would like to understand when to\n> use what, just to incorporate in the future patches.\n\nThere is, unfortunately, no hard-and-fast rule, but we want to\nmaintain as much consistency with the existing style as we can. I\nwasn't initially sure what would work best for this particular patch,\nbut after we committed to a name like ereport_startup_progress() that\nto me was a strong hint in favor of using names_like_this()\nthroughout. 
It seems impossible to imagine punctuating it as\nEreportStartupProgress() or something since that would be wildly\ninconsistent with the existing function name, and there seems to be no\ngood reason why this patch can't be internally consistent.\n\nTo some degree, we tend to use names_like_this() for low-level\nfunctions and NamesLikeThis() for higher-level functions, but that is\nnot a very consistent practice.\n\n> > reset_startup_progress_timeout(TimeStampTz now)\n> > {\n> > next_timeout = last_startup_progress_timeout + interval;\n> > if (next_timeout < now)\n> > {\n> > // Either the timeout was processed so late that we missed an entire cycle,\n> > // or the system clock was set backwards.\n> > next_timeout = now + interval;\n> > }\n> > enable_timeout_at(next_timeout);\n> > last_startup_progress_timeout = next_timeout;\n> > }\n>\n> As per the above logic, I would like to discuss 2 cases.\n>\n> Case-1:\n> First scheduled timeout is after 1 sec. But the time out occurred\n> after 1.5 sec. So the log msg prints after 1.5 sec. Next timer is\n> scheduled after 2 sec (scheduled_startup_progress_timeout + interval).\n> The condition (next_timeout < now) gets evaluated to false and\n> everything goes smooth.\n>\n> Case-2:\n> First scheduled timeout is after 1 sec. But the timeout occurred after\n> 2.5 sec. So the log msg prints after 2.5 sec. Now next time is\n> scheduled after 2 sec (scheduled_startup_progress_timeout + interval).\n> So the condition (next_timeout < now) will fail and the next_timeout\n> will get reset to 3.5 sec (2.5 + 1) and it continues. Is this\n> behaviour ok or should we set the next_timeout to 3 sec. Please share\n> your thoughts on this.\n\nI can't quite follow this, because it seems like you are sometimes\nviewing the interval as 1 second and sometimes as 2 seconds. 
Maybe you\ncould clarify that, and perhaps show example output?\n\nMy feeling is that the timer will almost always be slightly late, but\nit will very rarely be extremely late, and it will also very rarely be\nearly (only if someone resets the system clock). So let's consider\nthose two cases separately. If the timer is a little bit late each\ntime, we want to avoid drift, so we want to shorten the next sleep\ntime by the amount that the previous interrupt was late. If the\ninterval is 1000ms and the interrupt fires 1ms late then we should\nsleep 999ms the next time; if 2ms late, 998ms. That way, although\nthere will be some variation in which the messages are logged, the\ndrift won't accumulate over time and even after many minutes of\nrecovery the messages will be printed at ABOUT the same number of\nmilliseconds after the second every time, instead of drifting further\nand further off course.\n\nBut this strategy cannot be used if the drift is larger than the\ninterval. If we are trying to log a message every 1000ms and the timer\ndoesn't fire for 14ms, we can wait only 986ms the next time. If it\ndoesn't fire for 140ms, we can wait only 860ms the next time. But if\nthe timer doesn't fire for 1400ms, we cannot wait for -400ms the next\ntime. So what should we do? My proposal is to just wait for the\nconfigured interval, 1000ms, essentially giving up on drift\ncorrection. Now you could argue that we ought to just wait for 600ms\nin the hopes of making it 2 * 1000ms after the previous status\nmessage, but I'm not sure that really has any value, and it doesn't\nseem especially likely to work. The only way timer interrupts are\nlikely to be that badly delayed is if the system is horrifically\noverloaded, and if that's the case the next timer interrupt isn't\nlikely to fire on schedule anyway. 
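A toy simulation of the two policies described above (all numbers invented) shows why absorbing small lateness keeps the messages on the interval grid, while a naive "fired time plus interval" rule lets the drift accumulate.

```c
#include <assert.h>

/* Drift-correcting rule: schedule from the old target, not from "now". */
static long
corrected_next(long scheduled, long now, long interval)
{
	long		next = scheduled + interval;

	return (next < now) ? now + interval : next;
}

/* Five firings, each 1 ms late, with drift correction. */
static long
simulate_corrected(void)
{
	long		scheduled = 1000;
	int			i;

	for (i = 0; i < 5; i++)
		scheduled = corrected_next(scheduled, scheduled + 1, 1000);
	return scheduled;
}

/* The same firings with a naive "fired time + interval" rule. */
static long
simulate_naive(void)
{
	long		next = 1000;
	int			i;

	for (i = 0; i < 5; i++)
		next = (next + 1) + 1000;
	return next;
}
```

simulate_corrected() ends on 6000, still a multiple of the 1000 ms interval, while simulate_naive() ends on 6005: the five 1 ms delays have accumulated.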
Trying to correct for drift in such\na situation seems more likely to be confusing than to produce any\nhelpful result.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 9 Aug 2021 15:35:54 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "> I'd really like to avoid this. I don't see why it's necessary. You say\n> it causes a problem, but you don't explain what problem it causes.\n> enable_timeout_at() will disable the timer if not already done. I\n> think all we need to do is set scheduled_startup_progress_timeout = 0\n> before calling reset_startup_progress_timeout() in the \"init\" case and\n> don't do that for the non-init case. If that's not quite right, maybe\n> you can work out something that does work. But adding an is_init flag\n> to a function and having no common code between the is_init = true\n> case and the is_init = false case is exactly the kind of thing that I\n> don't want here. I want as much common code as possible.\n\nSetting scheduled_startup_progress_timeout = 0 in the \"init\" case\nsolves the problem. The problem was that if we call\ninit_startup_progress() repeatedly, the first call to\nreset_startup_progress_timeout() sets\nscheduled_startup_progress_timeout to \"now + interval\", and a later\ncall to reset_startup_progress_timeout() then uses that previously set\nvalue, which was not correct and did not behave as expected. I could\nsee that the first log gets printed far later than the expected\ninterval.\n\n> To some degree, we tend to use names_like_this() for low-level\n> functions and NamesLikeThis() for higher-level functions, but that is\n> not a very consistent practice.\n\nOk. Thanks for the information.\n\n> But this strategy cannot be used if the drift is larger than the\n> interval. 
If we are trying to log a message every 1000ms and the timer\n> doesn't fire for 14ms, we can wait only 986ms the next time. If it\n> doesn't fire for 140ms, we can wait only 860ms the next time. But if\n> the timer doesn't fire for 1400ms, we cannot wait for -400ms the next\n> time. So what should we do? My proposal is to just wait for the\n> configured interval, 1000ms, essentially giving up on drift\n> correction. Now you could argue that we ought to just wait for 600ms\n> in the hopes of making it 2 * 1000ms after the previous status\n> message, but I'm not sure that really has any value, and it doesn't\n> seem especially likely to work. The only way timer interrupts are\n> likely to be that badly delayed is if the system is horrifically\n> overloaded, and if that's the case the next timer interrupt isn't\n> likely to fire on schedule anyway. Trying to correct for drift in such\n> a situation seems more likely to be confusing than to produce any\n> helpful result.\n\nThis is what I was trying to convey in case-2. I agree that it is\nbetter to consider \"now + interval\" in such a case instead of trying\nto adjust the drift.\n\nPlease find the updated patch attached.\n\nOn Tue, Aug 10, 2021 at 1:06 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Aug 9, 2021 at 11:20 AM Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> > Modified the reset_startup_progress_timeout() to take the second\n> > parameter which indicates whether it is called for initialization or\n> > for resetting. Without this parameter there is a problem if we call\n> > init_startup_progress() more than one time if there is no call to\n> > ereport_startup_progress() in between as the code related to disabling\n> > the timer has been removed.\n>\n> I'd really like to avoid this. I don't see why it's necessary. You say\n> it causes a problem, but you don't explain what problem it causes.\n> enable_timeout_at() will disable the timer if not already done. 
I\n> think all we need to do is set scheduled_startup_progress_timeout = 0\n> before calling reset_startup_progress_timeout() in the \"init\" case and\n> don't do that for the non-init case. If that's not quite right, maybe\n> you can work out something that does work. But adding an is_init flag\n> to a function and having no common code between the is_init = true\n> case and the is_init = false case is exactly the kind of thing that I\n> don't want here. I want as much common code as possible.\n>\n> > > This makes sense, but I think I'd like to have all the functions in\n> > > this patch use names_like_this() rather than NamesLikeThis().\n> >\n> > I have changed all the function names accordingly. But I would like to\n> > know why it should be names_like_this() as there are many functions\n> > with the format NamesLikeThis(). I would like to understand when to\n> > use what, just to incorporate in the future patches.\n>\n> There is, unfortunately, no hard-and-fast rule, but we want to\n> maintain as much consistency with the existing style as we can. I\n> wasn't initially sure what would work best for this particular patch,\n> but after we committed to a name like ereport_startup_progress() that\n> to me was a strong hint in favor of using names_like_this()\n> throughout. 
It seems impossible to imagine punctuating it as\n> EreportStartupProgress() or something since that would be wildly\n> inconsistent with the existing function name, and there seems to be no\n> good reason why this patch can't be internally consistent.\n>\n> To some degree, we tend to use names_like_this() for low-level\n> functions and NamesLikeThis() for higher-level functions, but that is\n> not a very consistent practice.\n>\n> > > reset_startup_progress_timeout(TimeStampTz now)\n> > > {\n> > > next_timeout = last_startup_progress_timeout + interval;\n> > > if (next_timeout < now)\n> > > {\n> > > // Either the timeout was processed so late that we missed an entire cycle,\n> > > // or the system clock was set backwards.\n> > > next_timeout = now + interval;\n> > > }\n> > > enable_timeout_at(next_timeout);\n> > > last_startup_progress_timeout = next_timeout;\n> > > }\n> >\n> > As per the above logic, I would like to discuss 2 cases.\n> >\n> > Case-1:\n> > First scheduled timeout is after 1 sec. But the time out occurred\n> > after 1.5 sec. So the log msg prints after 1.5 sec. Next timer is\n> > scheduled after 2 sec (scheduled_startup_progress_timeout + interval).\n> > The condition (next_timeout < now) gets evaluated to false and\n> > everything goes smooth.\n> >\n> > Case-2:\n> > First scheduled timeout is after 1 sec. But the timeout occurred after\n> > 2.5 sec. So the log msg prints after 2.5 sec. Now next time is\n> > scheduled after 2 sec (scheduled_startup_progress_timeout + interval).\n> > So the condition (next_timeout < now) will fail and the next_timeout\n> > will get reset to 3.5 sec (2.5 + 1) and it continues. Is this\n> > behaviour ok or should we set the next_timeout to 3 sec. Please share\n> > your thoughts on this.\n>\n> I can't quite follow this, because it seems like you are sometimes\n> viewing the interval as 1 second and sometimes as 2 seconds. 
Maybe you\n> could clarify that, and perhaps show example output?\n>\n> My feeling is that the timer will almost always be slightly late, but\n> it will very rarely be extremely late, and it will also very rarely be\n> early (only if someone resets the system clock). So let's consider\n> those two cases separately. If the timer is a little bit late each\n> time, we want to avoid drift, so we want to shorten the next sleep\n> time by the amount that the previous interrupt was late. If the\n> interval is 1000ms and the interrupt fires 1ms late then we should\n> sleep 999ms the next time; if 2ms late, 998ms. That way, although\n> there will be some variation in which the messages are logged, the\n> drift won't accumulate over time and even after many minutes of\n> recovery the messages will be printed at ABOUT the same number of\n> milliseconds after the second every time, instead of drifting further\n> and further off course.\n>\n> But this strategy cannot be used if the drift is larger than the\n> interval. If we are trying to log a message every 1000ms and the timer\n> doesn't fire for 14ms, we can wait only 986ms the next time. If it\n> doesn't fire for 140ms, we can wait only 860ms the next time. But if\n> the timer doesn't fire for 1400ms, we cannot wait for -400ms the next\n> time. So what should we do? My proposal is to just wait for the\n> configured interval, 1000ms, essentially giving up on drift\n> correction. Now you could argue that we ought to just wait for 600ms\n> in the hopes of making it 2 * 1000ms after the previous status\n> message, but I'm not sure that really has any value, and it doesn't\n> seem especially likely to work. The only way timer interrupts are\n> likely to be that badly delayed is if the system is horrifically\n> overloaded, and if that's the case the next timer interrupt isn't\n> likely to fire on schedule anyway. 
Trying to correct for drift in such\n> a situation seems more likely to be confusing than to produce any\n> helpful result.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com", "msg_date": "Tue, 10 Aug 2021 18:58:38 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Tue, Aug 10, 2021 at 9:28 AM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n> Please find the updated patch attached.\n\nI think this is getting close. The output looks nice. However, I still\nsee a bunch of issues.\n\nYou mentioned previously that you would add documentation, but I do\nnot see it here.\n\nstartup_progress_timer_expired should be declared as sig_atomic_t like\nwe do in other cases (see interrupt.c).\n\nThe default value of the new GUC is 10s in postgresql.conf.sample, but\n-1 in guc.c. They should both be 10s, and the one in\npostgresql.conf.sample should be commented out.\n\nI suggest making the GUC GUC_UNIT_MS rather than GUC_UNIT_S, but\nexpressing the default in postgresl.conf.sample as 10s rather than\n10000ms. I tried values measured in milliseconds just for testing\npurposes and didn't initially understand why it wasn't working. I\ndon't think there's any reason it can't work.\n\nI would prefer to see log_startup_progress_interval declared and\ndefined in startup.c/startup.h rather than guc.c/guc.h.\n\nThere's no precedent in the tree for the use of ##__VA_ARGS__. On my\nsystem it seems to work fine if I just leave out the ##. Any reason\nnot to do that?\n\nTwo of the declarations in startup.h forgot the leading \"extern\",\nwhile the other two that are right next to them have it, matching\nproject style.\n\nI'm reasonably happy with most of the identifier names now, but I\nthink init_startup_progress() is confusing. 
The reason I think that is\nthat we call it more than once, which is not really what people think\nabout when they think of an \"init\" function, I think. It's not\ninitializing the startup progress facility in general; it's preparing\nfor the next phase of startup progress reporting. How about renaming\nit to begin_startup_progress_phase()? And then\nstartup_process_op_start_time could be\nstartup_process_phase_start_time to match.\n\nSyncDataDirectory() potentially walks over the data directory three\ntimes: once to call do_syncfs(), once to call pre_sync_fname(), and\nonce to call datadir_fsync_fname(). With this patch, the first and\nthird will emit progress messages if the operation runs long, but the\nsecond will not. I think they should all be treated the same. Also,\nthe locations where you've inserted calls to init_startup_progress()\ndon't really make it clear with what code that's associated. I'd put\nthem *immediately* before the call to do_syncfs() or walkdir().\n\nRemember that PostgreSQL comments are typically written \"telegraph\nstyle,\" so function comments should say \"Does whatever\" not \"It does\nwhatever\". Most of them are correct, but there's one sentence you need\nto fix.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 10 Aug 2021 11:25:55 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "> startup_progress_timer_expired should be declared as sig_atomic_t like\n> we do in other cases (see interrupt.c).\n\nFixed.\n\n> The default value of the new GUC is 10s in postgresql.conf.sample, but\n> -1 in guc.c. They should both be 10s, and the one in\n> postgresql.conf.sample should be commented out.\n\nFixed.\n\n> I suggest making the GUC GUC_UNIT_MS rather than GUC_UNIT_S, but\n> expressing the default in postgresl.conf.sample as 10s rather than\n> 10000ms. 
I tried values measured in milliseconds just for testing\n> purposes and didn't initially understand why it wasn't working. I\n> don't think there's any reason it can't work.\n\nAs suggested, I have changed it to GUC_UNIT_MS and kept the default\nvalue to 10s. I would like to know the reason why it can't be\nGUC_UNIT_S as we are expressing the values in terms of seconds.\n\n> I would prefer to see log_startup_progress_interval declared and\n> defined in startup.c/startup.h rather than guc.c/guc.h.\n\nFixed.\n\n> There's no precedent in the tree for the use of ##__VA_ARGS__. On my\n> system it seems to work fine if I just leave out the ##. Any reason\n> not to do that?\n\nI had added this to support if no variable argument are passed to the\nmacro. For example, in the previous patches we used to log the\nprogress at the end of the operation like\n\"ereport_startup_progress(true, \"data directory sync (syncfs) complete\nafter %ld.%02d s\");\". Since these calls are removed now, ## is not\nrequired. Hence removed in the attached patch.\n\n> Two of the declarations in startup.h forgot the leading \"extern\",\n> while the other two that are right next to them have it, matching\n> project style.\n\nI had not added extern since those function were not used in the other\nfiles. Now added to match the project style.\n\n> I'm reasonably happy with most of the identifier names now, but I\n> think init_startup_progress() is confusing. The reason I think that is\n> that we call it more than once, which is not really what people think\n> about when they think of an \"init\" function, I think. It's not\n> initializing the startup progress facility in general; it's preparing\n> for the next phase of startup progress reporting. How about renaming\n> it to begin_startup_progress_phase()? And then\n> startup_process_op_start_time could be\n> startup_process_phase_start_time to match.\n\nYes begin_startup_progress_phase() looks more appropriate. 
Instead of\nstartup_process_phase_start_time, startup_progress_phase_start_time\nlooks more appropriate. Changed these in the attached patch.\n\n> SyncDataDirectory() potentially walks over the data directory three\n> times: once to call do_syncfs(), once to call pre_sync_fname(), and\n> once to call datadir_fsync_fname(). With this patch, the first and\n> third will emit progress messages if the operation runs long, but the\n> second will not. I think they should all be treated the same. Also,\n> the locations where you've inserted calls to init_startup_progress()\n> don't really make it clear with what code that's associated. I'd put\n> them *immediately* before the call to do_syncfs() or walkdir().\n\nFixed.\n\n> Remember that PostgreSQL comments are typically written \"telegraph\n> style,\" so function comments should say \"Does whatever\" not \"It does\n> whatever\". Most of them are correct, but there's one sentence you need\n> to fix.\n\nFixed in the function comments of\nstartup_progress_timeout_has_expired(). Please let me now if this is\nnot the one you wanted me to correct.\n\n> You mentioned previously that you would add documentation, but I do\n> not see it here.\n\nSorry. I missed this. I have added the documentation in the attached patch.\nOn Tue, Aug 10, 2021 at 8:56 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Aug 10, 2021 at 9:28 AM Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> > Please find the updated patch attached.\n>\n> I think this is getting close. The output looks nice. However, I still\n> see a bunch of issues.\n>\n> You mentioned previously that you would add documentation, but I do\n> not see it here.\n>\n> startup_progress_timer_expired should be declared as sig_atomic_t like\n> we do in other cases (see interrupt.c).\n>\n> The default value of the new GUC is 10s in postgresql.conf.sample, but\n> -1 in guc.c. 
They should both be 10s, and the one in\n> postgresql.conf.sample should be commented out.\n>\n> I suggest making the GUC GUC_UNIT_MS rather than GUC_UNIT_S, but\n> expressing the default in postgresl.conf.sample as 10s rather than\n> 10000ms. I tried values measured in milliseconds just for testing\n> purposes and didn't initially understand why it wasn't working. I\n> don't think there's any reason it can't work.\n>\n> I would prefer to see log_startup_progress_interval declared and\n> defined in startup.c/startup.h rather than guc.c/guc.h.\n>\n> There's no precedent in the tree for the use of ##__VA_ARGS__. On my\n> system it seems to work fine if I just leave out the ##. Any reason\n> not to do that?\n>\n> Two of the declarations in startup.h forgot the leading \"extern\",\n> while the other two that are right next to them have it, matching\n> project style.\n>\n> I'm reasonably happy with most of the identifier names now, but I\n> think init_startup_progress() is confusing. The reason I think that is\n> that we call it more than once, which is not really what people think\n> about when they think of an \"init\" function, I think. It's not\n> initializing the startup progress facility in general; it's preparing\n> for the next phase of startup progress reporting. How about renaming\n> it to begin_startup_progress_phase()? And then\n> startup_process_op_start_time could be\n> startup_process_phase_start_time to match.\n>\n> SyncDataDirectory() potentially walks over the data directory three\n> times: once to call do_syncfs(), once to call pre_sync_fname(), and\n> once to call datadir_fsync_fname(). With this patch, the first and\n> third will emit progress messages if the operation runs long, but the\n> second will not. I think they should all be treated the same. Also,\n> the locations where you've inserted calls to init_startup_progress()\n> don't really make it clear with what code that's associated. 
I'd put\n> them *immediately* before the call to do_syncfs() or walkdir().\n>\n> Remember that PostgreSQL comments are typically written \"telegraph\n> style,\" so function comments should say \"Does whatever\" not \"It does\n> whatever\". Most of them are correct, but there's one sentence you need\n> to fix.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com", "msg_date": "Thu, 12 Aug 2021 17:10:17 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Thu, Aug 12, 2021 at 7:40 AM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n> > I suggest making the GUC GUC_UNIT_MS rather than GUC_UNIT_S, but\n> > expressing the default in postgresl.conf.sample as 10s rather than\n> > 10000ms. I tried values measured in milliseconds just for testing\n> > purposes and didn't initially understand why it wasn't working. I\n> > don't think there's any reason it can't work.\n>\n> As suggested, I have changed it to GUC_UNIT_MS and kept the default\n> value to 10s. I would like to know the reason why it can't be\n> GUC_UNIT_S as we are expressing the values in terms of seconds.\n\nI mean, it *could* be. There's just no advantage. Values in seconds\nwill work correctly either way. But values in milliseconds will only\nwork if it's GUC_UNIT_MS. It seems to me that it's better to make more\nthings work rather than fewer.\n\n> > There's no precedent in the tree for the use of ##__VA_ARGS__. On my\n> > system it seems to work fine if I just leave out the ##. Any reason\n> > not to do that?\n>\n> I had added this to support if no variable argument are passed to the\n> macro. For example, in the previous patches we used to log the\n> progress at the end of the operation like\n> \"ereport_startup_progress(true, \"data directory sync (syncfs) complete\n> after %ld.%02d s\");\". Since these calls are removed now, ## is not\n> required. 
Hence removed in the attached patch.\n\nHmm, OK. That's actually a pretty good reason for using ## there. It\njust made me nervous because we have no similar uses of ## in the\nbackend code. We rely on it elsewhere for concatenation, but not for\ncomma removal. Let's try leaving it out for now unless somebody else\nshows up with a different opinion.\n\n> I had not added extern since those function were not used in the other\n> files. Now added to match the project style.\n\nAnything that's not used in other files should be declared static in\nthe file itself, and not present in the header. Once you fix this for\nreset_startup_progress_timeout, the header won't need to include\ndatatype/timestamp.h any more, which is good, because we don't want\nheader files to depend on more other header files than necessary.\n\nLooking over this version, I realized something I (or you) should have\nnoticed sooner: you've added the RegisterTimeout call to\nInitPostgres(), but that's for things that are used by all backends,\nand this is only used by the startup process. So it seems to me that\nthe right place is StartupProcessMain. That would have the further\nadvantage of allowing startup_progress_timeout_handler to be made\nstatic. 
begin_startup_progress_phase() and\nstartup_progress_timeout_has_expired() are the actual API functions\nthough so they will need to remain extern.\n\n@@ -679,7 +680,6 @@ static char *recovery_target_lsn_string;\n /* should be static, but commands/variable.c needs to get at this */\n char *role_string;\n\n-\n /*\n * Displayable names for context types (enum GucContext)\n *\n\nThis hunk should be removed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 12 Aug 2021 10:56:34 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "Should this feature distinguish between crash recovery and archive recovery on\na hot standby ? Otherwise the standby will display this all the time.\n\n2021-08-14 16:13:33.139 CDT startup[11741] LOG: redo in progress, elapsed time: 124.42 s, current LSN: 0/EEE2100\n\nIf so, I think maybe you'd check !InArchiveRecovery (but until Robert finishes\ncleanup of xlog.c variables, I can't say that with much confidence).\n\n\n", "msg_date": "Sat, 14 Aug 2021 16:47:00 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Sat, Aug 14, 2021 at 5:47 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Should this feature distinguish between crash recovery and archive recovery on\n> a hot standby ? Otherwise the standby will display this all the time.\n>\n> 2021-08-14 16:13:33.139 CDT startup[11741] LOG: redo in progress, elapsed time: 124.42 s, current LSN: 0/EEE2100\n>\n> If so, I think maybe you'd check !InArchiveRecovery (but until Robert finishes\n> cleanup of xlog.c variables, I can't say that with much confidence).\n\nHmm. 
My inclination is to think that on an actual standby, you\nwouldn't want to get messages like this, but if you were doing a\npoint-in-time-recovery, then you would. So I think maybe\nInArchiveRecovery is not the right thing. Perhaps StandbyMode?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 16 Aug 2021 15:38:42 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "> Anything that's not used in other files should be declared static in\n> the file itself, and not present in the header. Once you fix this for\n> reset_startup_progress_timeout, the header won't need to include\n> datatype/timestamp.h any more, which is good, because we don't want\n> header files to depend on more other header files than necessary.\n\nThanks for identifying this. I will take care in the next patch.\n\n> Looking over this version, I realized something I (or you) should have\n> noticed sooner: you've added the RegisterTimeout call to\n> InitPostgres(), but that's for things that are used by all backends,\n> and this is only used by the startup process. So it seems to me that\n> the right place is StartupProcessMain. That would have the further\n> advantage of allowing startup_progress_timeout_handler to be made\n> static. begin_startup_progress_phase() and\n> startup_progress_timeout_has_expired() are the actual API functions\n> though so they will need to remain extern.\n\nYes. I had noticed this earlier and the RegisterTimeout() call was\nonly present in StartupProcessMain() and not in InitPostgres() in the\nearlier versions (v7) of the patch. Since StartupXLOG() gets called in\nthe 2 places, I had restricted the InitPostgres() flow by checking for\nthe !AmStartupProcess() in the newly added functions. But later we had\ndiscussion and concluded to add the RegisterTimeout() call even in\ncase of InitPostgres(). 
Kindly refer to the discussion just after the\nv7 patch in this thread and let me know your thoughts.\n\n> This hunk should be removed.\n\nI will remove it in the next patch.\n\nOn Tue, Aug 17, 2021 at 1:08 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Sat, Aug 14, 2021 at 5:47 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > Should this feature distinguish between crash recovery and archive recovery on\n> > a hot standby ? Otherwise the standby will display this all the time.\n> >\n> > 2021-08-14 16:13:33.139 CDT startup[11741] LOG: redo in progress, elapsed time: 124.42 s, current LSN: 0/EEE2100\n> >\n> > If so, I think maybe you'd check !InArchiveRecovery (but until Robert finishes\n> > cleanup of xlog.c variables, I can't say that with much confidence).\n>\n> Hmm. My inclination is to think that on an actual standby, you\n> wouldn't want to get messages like this, but if you were doing a\n> point-in-time-recovery, then you would. So I think maybe\n> InArchiveRecovery is not the right thing. Perhaps StandbyMode?\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 18 Aug 2021 12:23:55 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "> > Anything that's not used in other files should be declared static in\n> > the file itself, and not present in the header. Once you fix this for\n> > reset_startup_progress_timeout, the header won't need to include\n> > datatype/timestamp.h any more, which is good, because we don't want\n> > header files to depend on more other header files than necessary.\n>\n> Thanks for identifying this. 
I will take care in the next patch.\n\nFixed.\n\n> > This hunk should be removed.\n>\n> I will remove it in the next patch.\n\nRemoved.\n\nPlease find the updated patch attached.\n\nOn Wed, Aug 18, 2021 at 12:23 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> > Anything that's not used in other files should be declared static in\n> > the file itself, and not present in the header. Once you fix this for\n> > reset_startup_progress_timeout, the header won't need to include\n> > datatype/timestamp.h any more, which is good, because we don't want\n> > header files to depend on more other header files than necessary.\n>\n> Thanks for identifying this. I will take care in the next patch.\n>\n> > Looking over this version, I realized something I (or you) should have\n> > noticed sooner: you've added the RegisterTimeout call to\n> > InitPostgres(), but that's for things that are used by all backends,\n> > and this is only used by the startup process. So it seems to me that\n> > the right place is StartupProcessMain. That would have the further\n> > advantage of allowing startup_progress_timeout_handler to be made\n> > static. begin_startup_progress_phase() and\n> > startup_progress_timeout_has_expired() are the actual API functions\n> > though so they will need to remain extern.\n>\n> Yes. I had noticed this earlier and the RegisterTimeout() call was\n> only present in StartupProcessMain() and not in InitPostgres() in the\n> earlier versions (v7) of the patch. Since StartupXLOG() gets called in\n> the 2 places, I had restricted the InitPostgres() flow by checking for\n> the !AmStartupProcess() in the newly added functions. But later we had\n> discussion and concluded to add the RegisterTimeout() call even in\n> case of InitPostgres(). 
Kindly refer to the discussion just after the\n> v7 patch in this thread and let me know your thoughts.\n>\n> > This hunk should be removed.\n>\n> I will remove it in the next patch.\n>\n> On Tue, Aug 17, 2021 at 1:08 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Sat, Aug 14, 2021 at 5:47 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > Should this feature distinguish between crash recovery and archive recovery on\n> > > a hot standby ? Otherwise the standby will display this all the time.\n> > >\n> > > 2021-08-14 16:13:33.139 CDT startup[11741] LOG: redo in progress, elapsed time: 124.42 s, current LSN: 0/EEE2100\n> > >\n> > > If so, I think maybe you'd check !InArchiveRecovery (but until Robert finishes\n> > > cleanup of xlog.c variables, I can't say that with much confidence).\n> >\n> > Hmm. My inclination is to think that on an actual standby, you\n> > wouldn't want to get messages like this, but if you were doing a\n> > point-in-time-recovery, then you would. So I think maybe\n> > InArchiveRecovery is not the right thing. Perhaps StandbyMode?\n> >\n> > --\n> > Robert Haas\n> > EDB: http://www.enterprisedb.com", "msg_date": "Fri, 3 Sep 2021 13:23:56 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Fri, Sep 03, 2021 at 01:23:56PM +0530, Nitin Jadhav wrote:\n> Please find the updated patch attached.\n\nPlease check CA+TgmoZtbqxaOLdpNkBcDbz=41tWALA8kpH4M=RWtPYHC7-KNg@mail.gmail.com\n\nI agree with Robert that a standby server should not continuously show timing\nmessages for WAL replay.\n\nSome doc comments:\n\n+ Sets the time interval between each progress update of the operations\n+ performed during startup process. 
This produces the log message after\n\nEither say \"performed by the startup process\" or \"performed during startup\".\n\ns/the/a/\n\n+ every interval of time for the operations that take longer time. The unit\n\n..for those operations which take longer than the specified duration.\n\n+ used to specify the value is seconds. For example, if you set it to\n+ <literal> 10s </literal>, then after every <literal> 10s </literal> there\n\nremove \"after\"\n\n+ is a log message indicating which operation is going on and what is the\n\nsay \"..every 10s, a log is emitted indicating which operation is ongoing, and\nthe elapsed time from the beginning of the operation..\"\n\n+ elapsed time from beginning. If the value is set to <literal> 0 </literal>,\n+ then it logs all the available messages for such operations. <literal> -1\n\n\"..then all messages for such operations are logged.\"\n\n+ </literal> disables the feature. The default value is set to <literal> 10s\n+ </literal>\n\n\"The default value is >10s<.\"\n\n\n", "msg_date": "Fri, 3 Sep 2021 21:23:27 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Fri, Sep 03, 2021 at 09:23:27PM -0500, Justin Pryzby wrote:\n> On Fri, Sep 03, 2021 at 01:23:56PM +0530, Nitin Jadhav wrote:\n> > Please find the updated patch attached.\n> \n> Please check CA+TgmoZtbqxaOLdpNkBcDbz=41tWALA8kpH4M=RWtPYHC7-KNg@mail.gmail.com\n> \n> I agree with Robert that a standby server should not continuously show timing\n> messages for WAL replay.\n\nClearly. This should be limited to crash recovery, and maybe there\ncould be some checks to make sure that nothing is logged once a\nconsistent point is reached. Honestly, I don't see why we should have\na GUC for what's proposed here. A value too low would bloat the logs\nwith entries that are not that meaningful. 
A value too large would\njust prevent access to some information wanted. Wouldn't it be better\nto just pick up a value like 10s or 20s?\n\nLooking at v13..\n\n+ {\"log_startup_progress_interval\", PGC_SIGHUP, LOGGING_WHEN,\n+ gettext_noop(\"Sets the time interval between each progress update \"\n+ \"of the startup process.\"),\n+ gettext_noop(\"0 logs all messages. -1 turns this feature off.\"),\n+ GUC_UNIT_MS,\nThe unit is incorrect here, as that would default to 10ms, contrary to\nwhat the documentation says about 10s.\n--\nMichael", "msg_date": "Tue, 7 Sep 2021 14:28:16 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "> I also agree that this is the better place to do it. Hence modified\n> the patch accordingly. The condition \"!AmStartupProcess()\" is added to\n> differentiate whether the call is done from a startup process or some\n> other process. Actually StartupXLOG() gets called in 2 places. one in\n> StartupProcessMain() and the other in InitPostgres(). As the logging\n> of the startup progress is required only during the startup process\n> and not in the other cases,\n\nThe InitPostgres() case occurs when the server is started in bootstrap\nmode (during initdb) or in single-user mode (postgres --single). I do\nnot see any reason why we shouldn't produce progress messages in at\nleast the latter case. I suspect that someone who is in the rather\ndesperate scenario of having to use single-user mode would really like\nto know how long the server is going to take to start up.\n\nPerhaps during initdb we don't want messages, but I'm not sure that we\nneed to do anything about that here. 
None of the messages that the\nserver normally produces show up when you run initdb, so I guess they\nare getting redirected to /dev/null or something.\n\nSo I don't think that using AmStartupProcess() for this purpose is right.\n\nOn Tue, Sep 7, 2021 at 10:58 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Sep 03, 2021 at 09:23:27PM -0500, Justin Pryzby wrote:\n> > On Fri, Sep 03, 2021 at 01:23:56PM +0530, Nitin Jadhav wrote:\n> > > Please find the updated patch attached.\n> >\n> > Please check CA+TgmoZtbqxaOLdpNkBcDbz=41tWALA8kpH4M=RWtPYHC7-KNg@mail.gmail.com\n> >\n> > I agree with Robert that a standby server should not continuously show timing\n> > messages for WAL replay.\n>\n> Clearly. This should be limited to crash recovery, and maybe there\n> could be some checks to make sure that nothing is logged once a\n> consistent point is reached. Honestly, I don't see why we should have\n> a GUC for what's proposed here. A value too low would bloat the logs\n> with entries that are not that meaningful. A value too large would\n> just prevent access to some information wanted. Wouldn't it be better\n> to just pick up a value like 10s or 20s?\n>\n> Looking at v13..\n>\n> + {\"log_startup_progress_interval\", PGC_SIGHUP, LOGGING_WHEN,\n> + gettext_noop(\"Sets the time interval between each progress update \"\n> + \"of the startup process.\"),\n> + gettext_noop(\"0 logs all messages. 
-1 turns this feature off.\"),\n> + GUC_UNIT_MS,\n> The unit is incorrect here, as that would default to 10ms, contrary to\n> what the documentation says about 10s.\n> --\n> Michael\n\n\n", "msg_date": "Tue, 7 Sep 2021 12:54:53 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "> Looking over this version, I realized something I (or you) should have\n> noticed sooner: you've added the RegisterTimeout call to\n> InitPostgres(), but that's for things that are used by all backends,\n> and this is only used by the startup process. So it seems to me that\n> the right place is StartupProcessMain. That would have the further\n> advantage of allowing startup_progress_timeout_handler to be made\n> static. begin_startup_progress_phase() and\n> startup_progress_timeout_has_expired() are the actual API functions\n> though so they will need to remain extern.\n>\n> I agree with Robert that a standby server should not continuously show timing\n> messages for WAL replay.\n\nThe earlier discussion wrt this point is as follows.\n\n> > I also agree that this is the better place to do it. Hence modified\n> > the patch accordingly. The condition \"!AmStartupProcess()\" is added to\n> > differentiate whether the call is done from a startup process or some\n> > other process. Actually StartupXLOG() gets called in 2 places. one in\n> > StartupProcessMain() and the other in InitPostgres(). As the logging\n> > of the startup progress is required only during the startup process\n> > and not in the other cases,\n>\n> The InitPostgres() case occurs when the server is started in bootstrap\n> mode (during initdb) or in single-user mode (postgres --single). I do\n> not see any reason why we shouldn't produce progress messages in at\n> least the latter case. 
I suspect that someone who is in the rather\n> desperate scenario of having to use single-user mode would really like\n> to know how long the server is going to take to start up.\n>\n> Perhaps during initdb we don't want messages, but I'm not sure that we\n> need to do anything about that here. None of the messages that the\n> server normally produces show up when you run initdb, so I guess they\n> are getting redirected to /dev/null or something.\n>\n> So I don't think that using AmStartupProcess() for this purpose is right.\n\nSo as per the recent discussion, RegisterTimeout call should be\nremoved from InitPostgres() and the condition \"!AmStartupProcess()\" is\nto be added in begin_startup_progress_phase() and\nereport_startup_progress() to differentiate whether the call is from a\nstartup process or some other process. Kindly correct me if I am\nwrong.\n\n> Some doc comments:\n\nThanks for the suggestions. I will take care in the next patch.\n\n> Clearly. This should be limited to crash recovery, and maybe there\n> could be some checks to make sure that nothing is logged once a\n> consistent point is reached.\n\nThe purpose here is to show the progress of the operation if it is\ntaking longer than the interval set by the user until it completes the\noperation. Users should know what operation is happening in the\nbackground and to show the progress, displaying the elapsed time. So\naccording to me the consistent point is nothing but the end of the\noperation. Kindly let me know if you have something in mind and that\ncould be the better consistent point.\n\n> Honestly, I don't see why we should have\n> a GUC for what's proposed here. A value too low would bloat the logs\n> with entries that are not that meaningful. A value too large would\n> just prevent access to some information wanted. Wouldn't it be better\n> to just pick up a value like 10s or 20s?\n\nIt is difficult to finalise the value and use that value without\nproviding an option to change. 
If we choose one value (say 10s), it\nmay be too low for some users or too large for some other users. So I\nfeel it is better to provide an option to users so that they can\nchoose the value according to their need. Anyway the default value set\nfor this setting is 10s.\n\n> + {\"log_startup_progress_interval\", PGC_SIGHUP, LOGGING_WHEN,\n> + gettext_noop(\"Sets the time interval between each progress update \"\n> + \"of the startup process.\"),\n> + gettext_noop(\"0 logs all messages. -1 turns this feature off.\"),\n> + GUC_UNIT_MS,\n> The unit is incorrect here, as that would default to 10ms, contrary to\n> what the documentation says about 10s.\n\nKindly refer the previous few discussions wrt this point.\n\nOn Tue, Sep 7, 2021 at 10:58 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Sep 03, 2021 at 09:23:27PM -0500, Justin Pryzby wrote:\n> > On Fri, Sep 03, 2021 at 01:23:56PM +0530, Nitin Jadhav wrote:\n> > > Please find the updated patch attached.\n> >\n> > Please check CA+TgmoZtbqxaOLdpNkBcDbz=41tWALA8kpH4M=RWtPYHC7-KNg@mail.gmail.com\n> >\n> > I agree with Robert that a standby server should not continuously show timing\n> > messages for WAL replay.\n>\n> Clearly. This should be limited to crash recovery, and maybe there\n> could be some checks to make sure that nothing is logged once a\n> consistent point is reached. Honestly, I don't see why we should have\n> a GUC for what's proposed here. A value too low would bloat the logs\n> with entries that are not that meaningful. A value too large would\n> just prevent access to some information wanted. Wouldn't it be better\n> to just pick up a value like 10s or 20s?\n>\n> Looking at v13..\n>\n> + {\"log_startup_progress_interval\", PGC_SIGHUP, LOGGING_WHEN,\n> + gettext_noop(\"Sets the time interval between each progress update \"\n> + \"of the startup process.\"),\n> + gettext_noop(\"0 logs all messages. 
-1 turns this feature off.\"),\n> + GUC_UNIT_MS,\n> The unit is incorrect here, as that would default to 10ms, contrary to\n> what the documentation says about 10s.\n> --\n> Michael\n\n\n", "msg_date": "Tue, 7 Sep 2021 15:07:15 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Tue, Sep 07, 2021 at 03:07:15PM +0530, Nitin Jadhav wrote:\n> > Looking over this version, I realized something I (or you) should have\n> > noticed sooner: you've added the RegisterTimeout call to\n> > InitPostgres(), but that's for things that are used by all backends,\n> > and this is only used by the startup process. So it seems to me that\n> > the right place is StartupProcessMain. That would have the further\n> > advantage of allowing startup_progress_timeout_handler to be made\n> > static. begin_startup_progress_phase() and\n> > startup_progress_timeout_has_expired() are the actual API functions\n> > though so they will need to remain extern.\n> >\n> > I agree with Robert that a standby server should not continuously show timing\n> > messages for WAL replay.\n> \n> The earlier discussion wrt this point is as follows.\n\nI think you're confusing discussions.\n\nRobert was talking about initdb/bootstrap/single, and I separately and\nindependently asked about hot standbys. It seems like Robert and I agreed\nabout the desired behavior and there was no further discussion.\n\n> > Honestly, I don't see why we should have\n> > a GUC for what's proposed here. A value too low would bloat the logs\n> > with entries that are not that meaningful. A value too large would\n> > just prevent access to some information wanted. Wouldn't it be better\n> > to just pick up a value like 10s or 20s?\n\nI don't think bloating logs is a issue for values > 10sec.\n\nYou agreed that it's important to choose the \"right\" value, but I think that\nwill vary between users. 
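As a concrete illustration of that per-site variability, here are two hypothetical settings (assuming the GUC keeps the name and millisecond units proposed in this thread; none of this is committed syntax):

```ini
# postgresql.conf sketches -- illustrative values only
log_startup_progress_interval = 1min    # long-recovery site: sparse progress lines
#log_startup_progress_interval = 1s     # strict recovery-budget site: fine-grained lines
```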
Some installations with large checkpoint_timeout\nmight anticipate taking 15+min to perform recovery, but others might even have\na strict requirement that recovery must not take more than (say) 10sec; someone\nmight want to use this to verify that, or to optimize the slow parts of\nrecovery, with an interval that someone else might not care about.\n\n> > + {\"log_startup_progress_interval\", PGC_SIGHUP, LOGGING_WHEN,\n> > + gettext_noop(\"Sets the time interval between each progress update \"\n> > + \"of the startup process.\"),\n> > + gettext_noop(\"0 logs all messages. -1 turns this feature off.\"),\n> > + GUC_UNIT_MS,\n|+ 10, -1, INT_MAX,\n> > The unit is incorrect here, as that would default to 10ms, contrary to\n> > what the documentation says about 10s.\n> \n> Kindly refer the previous few discussions wrt this point.\n\nYou copied and pasted unrelated emails, which isn't helpful.\n\nMichael is right. You updated some of the units based on Robert's suggestion\nto use MS, but didn't update all of the corresponding parts of the patch.\nguc.c says that the units are in MS, which means that unqualified values are\ninterpretted as such. 
But postgresql.conf.sample still says \"seconds\", and\nguc.c says the default value is \"10\", and you still do:\n\n+ interval_in_ms = log_startup_progress_interval * 1000;\n\nI checked that this currently does not interpret the value as ms:\n|./tmp_install/usr/local/pgsql/bin/postgres -D src/test/regress/tmp_check/data/ -c log_startup_progress_interval=1\n|2021-09-07 06:28:58.694 CDT startup[18865] LOG: redo in progress, elapsed time: 1.00 s, current LSN: 0/E94ED88\n|2021-09-07 06:28:59.694 CDT startup[18865] LOG: redo in progress, elapsed time: 2.00 s, current LSN: 0/10808EE0\n|2021-09-07 06:29:00.694 CDT startup[18865] LOG: redo in progress, elapsed time: 3.00 s, current LSN: 0/126B8C80\n\n(Also, the GUC value is in the range 0..INT_MAX, so multiplying and storing to\nanother int could overflow.)\n\nI think the convention is to for GUC global vars to be initialized with the\nsame default as in guc.c, so both should be 10000, like:\n\n+int log_startup_progress_interval = 10 * 1000 /* 10sec */\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 7 Sep 2021 06:49:03 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "> I think you're confusing discussions.\n>\n> Robert was talking about initdb/bootstrap/single, and I separately and\n> independently asked about hot standbys. It seems like Robert and I agreed\n> about the desired behavior and there was no further discussion.\n\nSorry for including 2 separate points into one.\n\n> Looking over this version, I realized something I (or you) should have\n> noticed sooner: you've added the RegisterTimeout call to\n> InitPostgres(), but that's for things that are used by all backends,\n> and this is only used by the startup process. So it seems to me that\n> the right place is StartupProcessMain. That would have the further\n> advantage of allowing startup_progress_timeout_handler to be made\n> static. 
begin_startup_progress_phase() and\n> startup_progress_timeout_has_expired() are the actual API functions\n> though so they will need to remain extern.\n\nThe earlier discussion wrt this point is as follows.\n\n> > I also agree that this is the better place to do it. Hence modified\n> > the patch accordingly. The condition \"!AmStartupProcess()\" is added to\n> > differentiate whether the call is done from a startup process or some\n> > other process. Actually StartupXLOG() gets called in 2 places. one in\n> > StartupProcessMain() and the other in InitPostgres(). As the logging\n> > of the startup progress is required only during the startup process\n> > and not in the other cases,\n>\n> The InitPostgres() case occurs when the server is started in bootstrap\n> mode (during initdb) or in single-user mode (postgres --single). I do\n> not see any reason why we shouldn't produce progress messages in at\n> least the latter case. I suspect that someone who is in the rather\n> desperate scenario of having to use single-user mode would really like\n> to know how long the server is going to take to start up.\n>\n> Perhaps during initdb we don't want messages, but I'm not sure that we\n> need to do anything about that here. None of the messages that the\n> server normally produces show up when you run initdb, so I guess they\n> are getting redirected to /dev/null or something.\n>\n> So I don't think that using AmStartupProcess() for this purpose is right.\n\nThis point is really confusing. As per the earlier discussion we\nconcluded to include RegisterTimeout() call even in case of\nInitPostgres() to support logging in case of single-user mode. Now if\nwe remove the RegisterTimeout() call from InitPostgres(), we are not\ngoing to support that anymore. Is this what you're trying to convey?\nor we should add some checks and disable the code to RegisterTimeout()\nif it is other than single-user mode. 
I have added a check if\n(!IsPostmasterEnvironment) in the attached patch for this scenario.\nKindly confirm my understanding.\n\n> > Should this feature distinguish between crash recovery and archive recovery on\n> > a hot standby ? Otherwise the standby will display this all the time.\n> >\n> >2021-08-14 16:13:33.139 CDT startup[11741] LOG: redo in progress, elapsed time: 124.42 s, current LSN: 0/EEE2100\n> >\n> >If so, I think maybe you'd check !InArchiveRecovery (but until Robert finishes\n> > cleanup of xlog.c variables, I can't say that with much confidence).\n>\n> Hmm. My inclination is to think that on an actual standby, you\n> wouldn't want to get messages like this, but if you were doing a\n> point-in-time-recovery, then you would. So I think maybe\n> InArchiveRecovery is not the right thing. Perhaps StandbyMode?\n\nI also feel that the log messages should be recorded in case of\npoint-in-time-recovery. So I have added a check if (!StandbyMode) and\nverified the replication and point-in-time-recovery scenario.\n\n> > Some doc comments:\n>\n> Thanks for the suggestions. I will take care in the next patch.\n\nFixed.\n\n> Michael is right. You updated some of the units based on Robert's suggestion\n> to use MS, but didn't update all of the corresponding parts of the patch.\n> guc.c says that the units are in MS, which means that unqualified values are\n> interpretted as such. 
But postgresql.conf.sample still says \"seconds\", and\n> guc.c says the default value is \"10\", and you still do:\n>\n> + interval_in_ms = log_startup_progress_interval * 1000;\n>\n> I checked that this currently does not interpret the value as ms:\n> |./tmp_install/usr/local/pgsql/bin/postgres -D src/test/regress/tmp_check/data/ -c log_startup_progress_interval=1\n> |2021-09-07 06:28:58.694 CDT startup[18865] LOG: redo in progress, elapsed time: 1.00 s, current LSN: 0/E94ED88\n> |2021-09-07 06:28:59.694 CDT startup[18865] LOG: redo in progress, elapsed time: 2.00 s, current LSN: 0/10808EE0\n> |2021-09-07 06:29:00.694 CDT startup[18865] LOG: redo in progress, elapsed time: 3.00 s, current LSN: 0/126B8C80\n>\n> (Also, the GUC value is in the range 0..INT_MAX, so multiplying and storing to\n> another int could overflow.)\n>\n> I think the convention is to for GUC global vars to be initialized with the\n> same default as in guc.c, so both should be 10000, like:\n>\n> +int log_startup_progress_interval = 10 * 1000 /* 10sec */\n\nFollowing is the discussion done wrt this point. Kindly refer and\nshare your thoughts.\n\n> > > I suggest making the GUC GUC_UNIT_MS rather than GUC_UNIT_S, but\n> > > expressing the default in postgresl.conf.sample as 10s rather than\n> > > 10000ms. I tried values measured in milliseconds just for testing\n> > > purposes and didn't initially understand why it wasn't working. I\n> > > don't think there's any reason it can't work.\n> >\n> > As suggested, I have changed it to GUC_UNIT_MS and kept the default\n> > value to 10s. I would like to know the reason why it can't be\n> > GUC_UNIT_S as we are expressing the values in terms of seconds.\n>\n> I mean, it *could* be. There's just no advantage. Values in seconds\n> will work correctly either way. But values in milliseconds will only\n> work if it's GUC_UNIT_MS. 
It seems to me that it's better to make more\n> things work rather than fewer.\n\nThanks & Regards,\nNitin Jadhav\nOn Tue, Sep 7, 2021 at 5:19 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Tue, Sep 07, 2021 at 03:07:15PM +0530, Nitin Jadhav wrote:\n> > > Looking over this version, I realized something I (or you) should have\n> > > noticed sooner: you've added the RegisterTimeout call to\n> > > InitPostgres(), but that's for things that are used by all backends,\n> > > and this is only used by the startup process. So it seems to me that\n> > > the right place is StartupProcessMain. That would have the further\n> > > advantage of allowing startup_progress_timeout_handler to be made\n> > > static. begin_startup_progress_phase() and\n> > > startup_progress_timeout_has_expired() are the actual API functions\n> > > though so they will need to remain extern.\n> > >\n> > > I agree with Robert that a standby server should not continuously show timing\n> > > messages for WAL replay.\n> >\n> > The earlier discussion wrt this point is as follows.\n>\n> I think you're confusing discussions.\n>\n> Robert was talking about initdb/bootstrap/single, and I separately and\n> independently asked about hot standbys. It seems like Robert and I agreed\n> about the desired behavior and there was no further discussion.\n>\n> > > Honestly, I don't see why we should have\n> > > a GUC for what's proposed here. A value too low would bloat the logs\n> > > with entries that are not that meaningful. A value too large would\n> > > just prevent access to some information wanted. Wouldn't it be better\n> > > to just pick up a value like 10s or 20s?\n>\n> I don't think bloating logs is a issue for values > 10sec.\n>\n> You agreed that it's important to choose the \"right\" value, but I think that\n> will vary between users. 
Some installations with large checkpoint_timeout\n> might anticipate taking 15+min to perform recovery, but others might even have\n> a strict requirement that recovery must not take more than (say) 10sec; someone\n> might want to use this to verify that, or to optimize the slow parts of\n> recovery, with an interval that someone else might not care about.\n>\n> > > + {\"log_startup_progress_interval\", PGC_SIGHUP, LOGGING_WHEN,\n> > > + gettext_noop(\"Sets the time interval between each progress update \"\n> > > + \"of the startup process.\"),\n> > > + gettext_noop(\"0 logs all messages. -1 turns this feature off.\"),\n> > > + GUC_UNIT_MS,\n> |+ 10, -1, INT_MAX,\n> > > The unit is incorrect here, as that would default to 10ms, contrary to\n> > > what the documentation says about 10s.\n> >\n> > Kindly refer the previous few discussions wrt this point.\n>\n> You copied and pasted unrelated emails, which isn't helpful.\n>\n> Michael is right. You updated some of the units based on Robert's suggestion\n> to use MS, but didn't update all of the corresponding parts of the patch.\n> guc.c says that the units are in MS, which means that unqualified values are\n> interpretted as such. 
But postgresql.conf.sample still says \"seconds\", and\n> guc.c says the default value is \"10\", and you still do:\n>\n> + interval_in_ms = log_startup_progress_interval * 1000;\n>\n> I checked that this currently does not interpret the value as ms:\n> |./tmp_install/usr/local/pgsql/bin/postgres -D src/test/regress/tmp_check/data/ -c log_startup_progress_interval=1\n> |2021-09-07 06:28:58.694 CDT startup[18865] LOG: redo in progress, elapsed time: 1.00 s, current LSN: 0/E94ED88\n> |2021-09-07 06:28:59.694 CDT startup[18865] LOG: redo in progress, elapsed time: 2.00 s, current LSN: 0/10808EE0\n> |2021-09-07 06:29:00.694 CDT startup[18865] LOG: redo in progress, elapsed time: 3.00 s, current LSN: 0/126B8C80\n>\n> (Also, the GUC value is in the range 0..INT_MAX, so multiplying and storing to\n> another int could overflow.)\n>\n> I think the convention is to for GUC global vars to be initialized with the\n> same default as in guc.c, so both should be 10000, like:\n>\n> +int log_startup_progress_interval = 10 * 1000 /* 10sec */\n>\n> --\n> Justin", "msg_date": "Mon, 13 Sep 2021 20:32:54 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "> Michael is right. You updated some of the units based on Robert's suggestion\n> to use MS, but didn't update all of the corresponding parts of the patch.\n> guc.c says that the units are in MS, which means that unqualified values are\n> interpretted as such. 
But postgresql.conf.sample still says \"seconds\", and\n> guc.c says the default value is \"10\", and you still do:\n>\n> + interval_in_ms = log_startup_progress_interval * 1000;\n>\n> I checked that this currently does not interpret the value as ms:\n> |./tmp_install/usr/local/pgsql/bin/postgres -D src/test/regress/tmp_check/data/ -c log_startup_progress_interval=1\n> |2021-09-07 06:28:58.694 CDT startup[18865] LOG: redo in progress, elapsed time: 1.00 s, current LSN: 0/E94ED88\n> |2021-09-07 06:28:59.694 CDT startup[18865] LOG: redo in progress, elapsed time: 2.00 s, current LSN: 0/10808EE0\n> |2021-09-07 06:29:00.694 CDT startup[18865] LOG: redo in progress, elapsed time: 3.00 s, current LSN: 0/126B8C80\n>\n> (Also, the GUC value is in the range 0..INT_MAX, so multiplying and storing to\n> another int could overflow.)\n>\n> I think the convention is to for GUC global vars to be initialized with the\n> same default as in guc.c, so both should be 10000, like:\n>\n> +int log_startup_progress_interval = 10 * 1000 /* 10sec */\n\nThanks Justin for the detailed explanation. Done the necessary changes.\n\nPlease find the updated patch attached.\n\n\nOn Mon, Sep 13, 2021 at 8:32 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> > I think you're confusing discussions.\n> >\n> > Robert was talking about initdb/bootstrap/single, and I separately and\n> > independently asked about hot standbys. It seems like Robert and I agreed\n> > about the desired behavior and there was no further discussion.\n>\n> Sorry for including 2 separate points into one.\n>\n> > Looking over this version, I realized something I (or you) should have\n> > noticed sooner: you've added the RegisterTimeout call to\n> > InitPostgres(), but that's for things that are used by all backends,\n> > and this is only used by the startup process. So it seems to me that\n> > the right place is StartupProcessMain. 
That would have the further\n> > advantage of allowing startup_progress_timeout_handler to be made\n> > static. begin_startup_progress_phase() and\n> > startup_progress_timeout_has_expired() are the actual API functions\n> > though so they will need to remain extern.\n>\n> The earlier discussion wrt this point is as follows.\n>\n> > > I also agree that this is the better place to do it. Hence modified\n> > > the patch accordingly. The condition \"!AmStartupProcess()\" is added to\n> > > differentiate whether the call is done from a startup process or some\n> > > other process. Actually StartupXLOG() gets called in 2 places. one in\n> > > StartupProcessMain() and the other in InitPostgres(). As the logging\n> > > of the startup progress is required only during the startup process\n> > > and not in the other cases,\n> >\n> > The InitPostgres() case occurs when the server is started in bootstrap\n> > mode (during initdb) or in single-user mode (postgres --single). I do\n> > not see any reason why we shouldn't produce progress messages in at\n> > least the latter case. I suspect that someone who is in the rather\n> > desperate scenario of having to use single-user mode would really like\n> > to know how long the server is going to take to start up.\n> >\n> > Perhaps during initdb we don't want messages, but I'm not sure that we\n> > need to do anything about that here. None of the messages that the\n> > server normally produces show up when you run initdb, so I guess they\n> > are getting redirected to /dev/null or something.\n> >\n> > So I don't think that using AmStartupProcess() for this purpose is right.\n>\n> This point is really confusing. As per the earlier discussion we\n> concluded to include RegisterTimeout() call even in case of\n> InitPostgres() to support logging in case of single-user mode. Now if\n> we remove the RegisterTimeout() call from InitPostgres(), we are not\n> going to support that anymore. 
Is this what you're trying to convey?\n> or we should add some checks and disable the code to RegisterTimeout()\n> if it is other than single-user mode. I have added a check if\n> (!IsPostmasterEnvironment) in the attached patch for this scenario.\n> Kindly confirm my understanding.\n>\n> > > Should this feature distinguish between crash recovery and archive recovery on\n> > > a hot standby ? Otherwise the standby will display this all the time.\n> > >\n> > >2021-08-14 16:13:33.139 CDT startup[11741] LOG: redo in progress, elapsed time: 124.42 s, current LSN: 0/EEE2100\n> > >\n> > >If so, I think maybe you'd check !InArchiveRecovery (but until Robert finishes\n> > > cleanup of xlog.c variables, I can't say that with much confidence).\n> >\n> > Hmm. My inclination is to think that on an actual standby, you\n> > wouldn't want to get messages like this, but if you were doing a\n> > point-in-time-recovery, then you would. So I think maybe\n> > InArchiveRecovery is not the right thing. Perhaps StandbyMode?\n>\n> I also feel that the log messages should be recorded in case of\n> point-in-time-recovery. So I have added a check if (!StandbyMode) and\n> verified the replication and point-in-time-recovery scenario.\n>\n> > > Some doc comments:\n> >\n> > Thanks for the suggestions. I will take care in the next patch.\n>\n> Fixed.\n>\n> > Michael is right. You updated some of the units based on Robert's suggestion\n> > to use MS, but didn't update all of the corresponding parts of the patch.\n> > guc.c says that the units are in MS, which means that unqualified values are\n> > interpretted as such. 
But postgresql.conf.sample still says \"seconds\", and\n> > guc.c says the default value is \"10\", and you still do:\n> >\n> > + interval_in_ms = log_startup_progress_interval * 1000;\n> >\n> > I checked that this currently does not interpret the value as ms:\n> > |./tmp_install/usr/local/pgsql/bin/postgres -D src/test/regress/tmp_check/data/ -c log_startup_progress_interval=1\n> > |2021-09-07 06:28:58.694 CDT startup[18865] LOG: redo in progress, elapsed time: 1.00 s, current LSN: 0/E94ED88\n> > |2021-09-07 06:28:59.694 CDT startup[18865] LOG: redo in progress, elapsed time: 2.00 s, current LSN: 0/10808EE0\n> > |2021-09-07 06:29:00.694 CDT startup[18865] LOG: redo in progress, elapsed time: 3.00 s, current LSN: 0/126B8C80\n> >\n> > (Also, the GUC value is in the range 0..INT_MAX, so multiplying and storing to\n> > another int could overflow.)\n> >\n> > I think the convention is to for GUC global vars to be initialized with the\n> > same default as in guc.c, so both should be 10000, like:\n> >\n> > +int log_startup_progress_interval = 10 * 1000 /* 10sec */\n>\n> Following is the discussion done wrt this point. Kindly refer and\n> share your thoughts.\n>\n> > > > I suggest making the GUC GUC_UNIT_MS rather than GUC_UNIT_S, but\n> > > > expressing the default in postgresl.conf.sample as 10s rather than\n> > > > 10000ms. I tried values measured in milliseconds just for testing\n> > > > purposes and didn't initially understand why it wasn't working. I\n> > > > don't think there's any reason it can't work.\n> > >\n> > > As suggested, I have changed it to GUC_UNIT_MS and kept the default\n> > > value to 10s. I would like to know the reason why it can't be\n> > > GUC_UNIT_S as we are expressing the values in terms of seconds.\n> >\n> > I mean, it *could* be. There's just no advantage. Values in seconds\n> > will work correctly either way. But values in milliseconds will only\n> > work if it's GUC_UNIT_MS. 
It seems to me that it's better to make more\n> > things work rather than fewer.\n>\n> Thanks & Regards,\n> Nitin Jadhav\n> On Tue, Sep 7, 2021 at 5:19 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > On Tue, Sep 07, 2021 at 03:07:15PM +0530, Nitin Jadhav wrote:\n> > > > Looking over this version, I realized something I (or you) should have\n> > > > noticed sooner: you've added the RegisterTimeout call to\n> > > > InitPostgres(), but that's for things that are used by all backends,\n> > > > and this is only used by the startup process. So it seems to me that\n> > > > the right place is StartupProcessMain. That would have the further\n> > > > advantage of allowing startup_progress_timeout_handler to be made\n> > > > static. begin_startup_progress_phase() and\n> > > > startup_progress_timeout_has_expired() are the actual API functions\n> > > > though so they will need to remain extern.\n> > > >\n> > > > I agree with Robert that a standby server should not continuously show timing\n> > > > messages for WAL replay.\n> > >\n> > > The earlier discussion wrt this point is as follows.\n> >\n> > I think you're confusing discussions.\n> >\n> > Robert was talking about initdb/bootstrap/single, and I separately and\n> > independently asked about hot standbys. It seems like Robert and I agreed\n> > about the desired behavior and there was no further discussion.\n> >\n> > > > Honestly, I don't see why we should have\n> > > > a GUC for what's proposed here. A value too low would bloat the logs\n> > > > with entries that are not that meaningful. A value too large would\n> > > > just prevent access to some information wanted. Wouldn't it be better\n> > > > to just pick up a value like 10s or 20s?\n> >\n> > I don't think bloating logs is a issue for values > 10sec.\n> >\n> > You agreed that it's important to choose the \"right\" value, but I think that\n> > will vary between users. 
Some installations with large checkpoint_timeout\n> > might anticipate taking 15+min to perform recovery, but others might even have\n> > a strict requirement that recovery must not take more than (say) 10sec; someone\n> > might want to use this to verify that, or to optimize the slow parts of\n> > recovery, with an interval that someone else might not care about.\n> >\n> > > > + {\"log_startup_progress_interval\", PGC_SIGHUP, LOGGING_WHEN,\n> > > > + gettext_noop(\"Sets the time interval between each progress update \"\n> > > > + \"of the startup process.\"),\n> > > > + gettext_noop(\"0 logs all messages. -1 turns this feature off.\"),\n> > > > + GUC_UNIT_MS,\n> > |+ 10, -1, INT_MAX,\n> > > > The unit is incorrect here, as that would default to 10ms, contrary to\n> > > > what the documentation says about 10s.\n> > >\n> > > Kindly refer the previous few discussions wrt this point.\n> >\n> > You copied and pasted unrelated emails, which isn't helpful.\n> >\n> > Michael is right. You updated some of the units based on Robert's suggestion\n> > to use MS, but didn't update all of the corresponding parts of the patch.\n> > guc.c says that the units are in MS, which means that unqualified values are\n> > interpretted as such. 
But postgresql.conf.sample still says \"seconds\", and\n> > guc.c says the default value is \"10\", and you still do:\n> >\n> > + interval_in_ms = log_startup_progress_interval * 1000;\n> >\n> > I checked that this currently does not interpret the value as ms:\n> > |./tmp_install/usr/local/pgsql/bin/postgres -D src/test/regress/tmp_check/data/ -c log_startup_progress_interval=1\n> > |2021-09-07 06:28:58.694 CDT startup[18865] LOG: redo in progress, elapsed time: 1.00 s, current LSN: 0/E94ED88\n> > |2021-09-07 06:28:59.694 CDT startup[18865] LOG: redo in progress, elapsed time: 2.00 s, current LSN: 0/10808EE0\n> > |2021-09-07 06:29:00.694 CDT startup[18865] LOG: redo in progress, elapsed time: 3.00 s, current LSN: 0/126B8C80\n> >\n> > (Also, the GUC value is in the range 0..INT_MAX, so multiplying and storing to\n> > another int could overflow.)\n> >\n> > I think the convention is to for GUC global vars to be initialized with the\n> > same default as in guc.c, so both should be 10000, like:\n> >\n> > +int log_startup_progress_interval = 10 * 1000 /* 10sec */\n> >\n> > --\n> > Justin", "msg_date": "Wed, 22 Sep 2021 19:59:17 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Wed, Sep 22, 2021 at 10:28 AM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n> Thanks Justin for the detailed explanation. Done the necessary changes.\n\nNot really. The documentation here does not make a ton of sense:\n\n+ Sets the time interval between each progress update of the operations\n+ performed by the startup process. This produces the log messages for\n+ those operations which take longer than the specified\nduration. The unit\n+ used to specify the value is milliseconds. 
For example, if\nyou set it to\n+ <literal> 10s </literal>, then every <literal> 10s\n</literal>, a log is\n+ emitted indicating which operation is ongoing, and the\nelapsed time from\n+ the beginning of the operation. If the value is set to\n<literal> 0 </literal>,\n+ then all messages for such operations are logged. <literal>\n-1 </literal>\n+ disables the feature. The default value is <literal> 10s </literal>\n\nI really don't know what to say about this. You say that the time is\nmeasured in milliseconds, and then immediately turn around and say\n\"For example, if you set it to 10s\". Now we do expect that most people\nwill set it to intervals that are measured in seconds rather than\nmilliseconds, but saying that setting it to a value measured in\nseconds is an example of setting it in milliseconds is not logical. It\nalso looks pretty silly to say that if you set the value to 10s,\nsomething will happen every 10s. What else would anyone expect to\nhappen? You really need to give some thought to how to explain the\nbehavior in a way that is clear and logical but not overly wordy.\nAlso, please note that you've got spaces around the literals, which\ndoes not match the surrounding style and does not render properly in\nHTML.\n\nIt is also not logical to define 0 as meaning that \"all messages for\nsuch operations are logged\". What does that even mean? It makes sense\nfor something like log_autovacuum_min_duration, because there we are\ntalking about logging one message per operation, and we could log\nmessages for all operations or just some of them. Here we are talking\nabout the time between one message and the next, so talking about \"all\nmessages\" does not really seem to make a lot of sense. Experimentally,\nwhat 0 actually does is cause the system to spam log lines in a tight\nloop, which cannot be what anyone wants.
I think you should make 0\nmean disabled, and a positive value mean log at that interval, and\ndisallow -1 altogether.\n\nAnd on that note, I tested your patch with\nlog_startup_progress_interval=-1 and found that -1 behaves just like\n0. In other words, contrary to what the documentation says, -1 does\nnot disable the feature. It instead behaves just like 0. It's really\nconfusing to me how you write documentation that says -1 has a certain\nbehavior without thinking about the fact that you haven't written any\ncode that would make -1 behave that way. And apparently you didn't\ntest it, either. It took me approximately 1 minute of testing to find\nthat this is broken, which really is not very much.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 23 Sep 2021 12:14:03 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "> I really don't know what to say about this. You say that the time is\n> measured in milliseconds, and then immediately turn around and say\n> \"For example, if you set it to 10s\". Now we do expect that most people\n> will set it to intervals that are measured in seconds rather than\n> milliseconds, but saying that setting it to a value measured in\n> seconds is an example of setting it in milliseconds is not logical.\n\nBased on the statement \"I suggest making the GUC GUC_UNIT_MS rather\nthan GUC_UNIT_S, but expressing the default in postgresl.conf.sample\nas 10s rather than 10000ms\", I have used the default value in the\npostgresql.conf.sample as 10s rather than 10000ms. So I just used the\nsame value in the example too in config.sgml. If it is really getting\nconfusing, I will change it to 100ms in config.sgml.\n\n> It also looks pretty silly to say that if you set the value to 10s,\n> something will happen every 10s. What else would anyone expect to\n> happen?
You really need to give some thought to how to explain the\n> behavior in a way that is clear and logical but not overly wordy.\n\nAdded a few lines about that. \"For example, if you set it to 1000ms,\nthen it tries to emit a log every 1000ms. If the log message is not\navailable at every 100th millisecond, then there is a possibility of\ndelay in emitting the log. If the delay is more than a cycle or if the\nsystem clock gets set backwards then the next attempt is done based on\nthe last logging time, otherwise the delay gets adjusted in the next\nattempt.\"\n\nPlease correct the explanation if it does not meet your expectations.\n\n> Also, please note that you've got spaces around the literals, which\n> does not match the surrounding style and does not render properly in\n> HTML.\n\nFixed.\n\n> It is also not logical to define 0 as meaning that \"all messages for\n> such operations are logged\". What does that even mean? It makes sense\n> for something like log_autovacuum_min_duration, because there we are\n> talking about logging one message per operation, and we could log\n> messages for all operations or just some of them. Here we are talking\n> about the time between one message and the next, so talking about \"all\n> messages\" does not really seem to make a lot of sense. Experimentally,\n> what 0 actually does is cause the system to spam log lines in a tight\n> loop, which cannot be what anyone wants. I think you should make 0\n> mean disabled, and a positive value mean log at that interval, and\n> disallow -1 altogether.\n\nMade changes which indicate 0 mean disabled, > 0 mean interval in\nmillisecond and removed -1.\n\nPlease find the patch attached.\n\n\n\nOn Thu, Sep 23, 2021 at 9:44 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Sep 22, 2021 at 10:28 AM Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> > Thanks Justin for the detailed explanation. Done the necessary changes.\n>\n> Not really. 
The documentation here does not make a ton of sense:\n>\n> + Sets the time interval between each progress update of the operations\n> + performed by the startup process. This produces the log messages for\n> + those operations which take longer than the specified duration. The unit\n> + used to specify the value is milliseconds. For example, if you set it to\n> + <literal> 10s </literal>, then every <literal> 10s </literal>, a log is\n> + emitted indicating which operation is ongoing, and the elapsed time from\n> + the beginning of the operation. If the value is set to <literal> 0 </literal>,\n> + then all messages for such operations are logged. <literal> -1 </literal>\n> + disables the feature. The default value is <literal> 10s </literal>\n>\n> I really don't know what to say about this. You say that the time is\n> measured in milliseconds, and then immediately turn around and say\n> \"For example, if you set it to 10s\". Now we do expect that most people\n> will set it to intervals that are measured in seconds rather than\n> milliseconds, but saying that setting it to a value measured in\n> seconds is an example of setting it in milliseconds is not logical. It\n> also looks pretty silly to say that if you set the value to 10s,\n> something will happen every 10s. What else would anyone expect to\n> happen? You really need to give some thought to how to explain the\n> behavior in a way that is clear and logical but not overly wordy.\n> Also, please note that you've got spaces around the literals, which\n> does not match the surrounding style and does not render properly in\n> HTML.\n>\n> It is also not logical to define 0 as meaning that \"all messages for\n> such operations are logged\". What does that even mean? It makes sense\n> for something like log_autovacuum_min_duration, because there we are\n> talking about logging one message per operation, and we could log\n> messages for all operations or just some of them. 
Here we are talking\n> about the time between one message and the next, so talking about \"all\n> messages\" does not really seem to make a lot of sense. Experimentally,\n> what 0 actually does is cause the system to spam log lines in a tight\n> loop, which cannot be what anyone wants. I think you should make 0\n> mean disabled, and a positive value mean log at that interval, and\n> disallow -1 altogether.\n>\n> And on that note, I tested your patch with\n> log_startup_progress_interval=-1 and found that -1 behaves just like\n> 0. In other words, contrary to what the documentation says, -1 does\n> not disable the feature. It instead behaves just like 0. It's really\n> confusing to me how you write documentation that says -1 has a certain\n> behavior without thinking about the fact that you haven't written any\n> code that would make -1 behave that way. And apparently you didn't\n> test it, either. It took me approximately 1 minute of testing to find\n> that this is broken, which really is not very much.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com", "msg_date": "Mon, 27 Sep 2021 16:57:20 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Mon, Sep 27, 2021 at 04:57:20PM +0530, Nitin Jadhav wrote:\n> > It is also not logical to define 0 as meaning that \"all messages for\n> > such operations are logged\". What does that even mean? It makes sense\n> > for something like log_autovacuum_min_duration, because there we are\n> > talking about logging one message per operation, and we could log\n> > messages for all operations or just some of them. Here we are talking\n> > about the time between one message and the next, so talking about \"all\n> > messages\" does not really seem to make a lot of sense. 
Experimentally,\n> > what 0 actually does is cause the system to spam log lines in a tight\n> > loop, which cannot be what anyone wants. I think you should make 0\n> > mean disabled, and a positive value mean log at that interval, and\n> > disallow -1 altogether.\n> \n> Made changes which indicate 0 mean disabled, > 0 mean interval in\n> millisecond and removed -1.\n> \n> Please find the patch attached.\n\nI think you misunderstood - Robert was saying that interval=0 doesn't work, not\nsuggesting that you write more documentation about it.\n\nAlso, I agree with Robert that the documentation is too verbose. I don't think\nyou need to talk about what happens if the clock goes backwards (It just needs\nto behave conveniently).\n\nLook at the other _duration statements for what they say about units.\n\"If this value is specified without units, it is taken as milliseconds.\"\nhttps://www.postgresql.org/docs/14/runtime-config-logging.html\n log_autovacuum_min_duration\n log_min_duration_statement\n\n>>It also looks pretty silly to say that if you set the value to 10s, something\n>>will happen every 10s. What else would anyone expect to happen?\n\n@Robert: that's consistent with existing documentation, even though it might\nseem obvious and silly to us.\n\n| For example, if you set this to 250ms then all automatic vacuums and analyzes that run 250ms or longer will be logged\n| For example, if you set it to 250ms then all SQL statements that run 250ms or longer will be logged\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 27 Sep 2021 08:32:44 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Mon, Sep 27, 2021 at 7:26 AM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n> > I really don't know what to say about this. 
You say that the time is\n> > measured in milliseconds, and then immediately turn around and say\n> > \"For example, if you set it to 10s\". Now we do expect that most people\n> > will set it to intervals that are measured in seconds rather than\n> > milliseconds, but saying that setting it to a value measured in\n> > seconds is an example of setting it in milliseconds is not logical.\n>\n> Based on the statement \"I suggest making the GUC GUC_UNIT_MS rather\n> than GUC_UNIT_S, but expressing the default in postgresl.conf.sample\n> as 10s rather than 10000ms\", I have used the default value in the\n> postgresl.conf.sample as 10s rather than 10000ms. So I just used the\n> same value in the example too in config.sgml. If it is really getting\n> confusing, I will change it to 100ms in config.sgml.\n\nThat's really not what I'm complaining about. I think if we're going\nto give an example at all, 10s is a better example than 100ms,\nbecause 10s is a value that people are more likely to find useful. But\nI'm not sure that it's necessary to mention a specific value, and if\nit is, I think it needs to be phrased in a less confusing way.\n\n> Made changes which indicate 0 mean disabled, > 0 mean interval in\n> millisecond and removed -1.\n\nWell, I see that -1 is now disallowed, and that's good as far as it\ngoes, but 0 still does not actually disable the feature. I don't\n
I don't\nunderstand why you posted the previous version of the patch without\ntesting that it works, and I even less understand why you are posting\nanother version without fixing the bug that I pointed out to you in\nthe last version.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 27 Sep 2021 11:49:59 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Mon, Sep 27, 2021 at 9:32 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >>It also looks pretty silly to say that if you set the value to 10s, something\n> >>will happen every 10s. What else would anyone expect to happen?\n>\n> @Robert: that's consistent with existing documentation, even though it might\n> seem obvious and silly to us.\n>\n> | For example, if you set this to 250ms then all automatic vacuums and analyzes that run 250ms or longer will be logged\n> | For example, if you set it to 250ms then all SQL statements that run 250ms or longer will be logged\n\nFair enough, but I still don't like it much. I tried my hand at\nrewriting this and came up with the attached:\n\n+ Sets the amount of time after which the startup process will log\n+ a message about a long-running operation that is still in progress,\n+ as well as the interval between further progress messages for that\n+ operation. 
This setting is applied separately to each operation.\n+ For example, if syncing the data directory takes 25 seconds and\n+ thereafter resetting unlogged relations takes 8 seconds, and if this\n+ setting has the default value of 10 seconds, then a message will be\n+ logged for syncing the data directory after it has been in progress\n+ for 10 seconds and again after it has been in progress for 20 seconds,\n+ but nothing will be logged for resetting unlogged relations.\n+ A setting of <literal>0</literal> disables the feature.\n\nI prefer this to Nitin's version because I think it could be unclear\nto someone that the value applies separately to each operation,\nwhereas I don't think we need to document that we can't guarantee that\nthe messages will be perfectly on time - that's true of every kind of\nscheduled event in pretty much every computer system - or what happens\nif the system clock goes backwards. Those are things we should try to\nget right, as well as we can anyway, but we don't need to tell the\nuser that we got them right.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 27 Sep 2021 12:17:14 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "> +/*\n> + * Decides whether to log the startup progress or not based on whether the\n> + * timer is expired or not. Returns FALSE if the timer is not expired, otherwise\n> + * calculates the elapsed time and sets the respective out parameters secs and\n> + * usecs. Enables the timer for the next log message and returns TRUE.\n> + */\n> +bool\n> +startup_progress_timeout_has_expired(long *secs, int *usecs)\n\nI think this comment can be worded better. It says it \"decides\", but it\ndoesn't actually decide on any action to take -- it just reports whether\nthe timer expired or not, to allow its caller to make the decision. 
In\nsuch situations we just say something like \"Report whether startup\nprogress has caused a timeout, return true and rearm the timer if it\ndid, or just return false otherwise\"; and we don't indicate what the\nvalue is going to be used *for*. Then the caller can use the boolean\nreturn value to make a decision, such as whether something is going to\nbe logged. This function can be oblivious to details such as this:\n\n> +\t/* If the timeout has not occurred, then no need to log the details. */\n> +\tif (!startup_progress_timer_expired)\n> +\t\treturn false;\n\nhere we can just say \"No timeout has occurred\" and make no inference\nabout what's going to happen or not happen.\n\nAlso, for functions that do things like this we typically use English\nsentence structure with the function name starting with the verb --\nperhaps has_startup_progress_timeout_expired(). Sometimes we are lax\nabout this if we have some sort of poor-man's modularisation by using a\ncommon prefix for several functions doing related things, which perhaps\ncould be \"startup_progress_*\" in your case, but your other functions are\nalready not doing that (such as begin_startup_progress_phase) so it's\nnot clear why you would not use the most natural name for this one.\n\n> +\tereport_startup_progress(\"syncing data directory (syncfs), elapsed time: %ld.%02d s, current path: %s\",\n> +\t\t\t\t\t\t\t path);\n\nPlease make sure to add ereport_startup_progress() as a translation\ntrigger in src/backend/nls.mk.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 27 Sep 2021 13:47:38 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "> That's really not what I'm complaining about. 
I think if we're going\n> to give an example at all, 10ms is a better example than 100ms,\n> because 10s is a value that people are more likely to find useful. But\n> I'm not sure that it's necessary to mention a specific value, and if\n> it is, I think it needs to be phrased in a less confusing way.\n>\n> > >>It also looks pretty silly to say that if you set the value to 10s, something\n> > >>will happen every 10s. What else would anyone expect to happen?\n> >\n> > @Robert: that's consistent with existing documentation, even though it might\n> > seem obvious and silly to us.\n> >\n> > | For example, if you set this to 250ms then all automatic vacuums and analyzes that run 250ms or longer will be logged\n> > | For example, if you set it to 250ms then all SQL statements that run 250ms or longer will be logged\n>\n> Fair enough, but I still don't like it much. I tried my hand at\n> rewriting this and came up with the attached:\n>\n> + Sets the amount of time after which the startup process will log\n> + a message about a long-running operation that is still in progress,\n> + as well as the interval between further progress messages for that\n> + operation. 
This setting is applied separately to each operation.\n> + For example, if syncing the data directory takes 25 seconds and\n> + thereafter resetting unlogged relations takes 8 seconds, and if this\n> + setting has the default value of 10 seconds, then a messages will be\n> + logged for syncing the data directory after it has been in progress\n> + for 10 seconds and again after it has been in progress for 20 seconds,\n> + but nothing will be logged for resetting unlogged operations.\n> + A setting of <literal>0</literal> disables the feature.\n>\n> I prefer this to Nitin's version because I think it could be unclear\n> to someone that the value applies separately to each operation,\n> whereas I don't think we need to document that we can't guarantee that\n> the messages will be perfectly on time - that's true of every kind of\n> scheduled event in pretty much every computer system - or what happens\n> if the system clock goes backwards. Those are things we should try to\n> get right, as well as we can anyway, but we don't need to tell the\n> user that we got them right.\n\nI thought mentioning the unit in milliseconds and the example in\nseconds would confuse the user, so I changed the example to\nmilliseconds. The message behind the above description looks good to me;\nhowever, I feel some sentences can be explained in fewer words. The\ninformation related to the units is missing and I feel it should be\nmentioned in the document. The example says, if the setting has the\ndefault value of 10 seconds, then it explains the behaviour. I feel it\nmay not be the default value, it can be any value set by the user. So\nmentioning 'default' in the example does not look good to me. I feel\nwe just have to mention \"if this setting is set to the value of 10\nseconds\". Below is the modified version of the above information.\n\n+ Sets the amount of time after every such interval the startup process\n+ will log a message about a long-running operation that is still in\n+ progress. 
This setting is applied separately to each operation.\n+ For example, if syncing the data directory takes 25 seconds and\n+ thereafter resetting unlogged relations takes 8 seconds, and if this\n+ setting is set to the value of 10 seconds, then a messages will be\n+ logged for syncing the data directory after it has been in progress\n+ for 10 seconds and again after it has been in progress for 20 seconds,\n+ but nothing will be logged for resetting unlogged operations.\n+ A setting of <literal>0</literal> disables the feature. If this value\n+ is specified without units, it is taken as milliseconds.\n\n> Well, I see that -1 is now disallowed, and that's good as far as it\n> goes, but 0 still does not actually disable the feature. I don't\n> understand why you posted the previous version of the patch without\n> testing that it works, and I even less understand why you are posting\n> another version without fixing the bug that I pointed out to you in\n> the last version.\n\nI had added additional code to check the value of the\n'log_startup_progress_interval' variable and disable the feature in\ncase of -1 in the earlier versions of the patch (Specifically\nv9.patch). There was a review comment for v9 patch and it resulted in\nmajor refactoring of the patch. The comment was\n\n> With these changes you'd have only 1 place in the code that needs to\n> care about log_startup_progress_interval, as opposed to 3 as you have\n> it currently, and only one place that enables the timeout, as opposed\n> to 2 as you have it currently. I think that would be tidier.\n\nBased on the above comment and the idea behind enabling the timer, it\ndoes not log anything if the value is set to -1. So I thought there is\nno extra code necessary to disable the feature even though it executes\nthrough the code flow. So I did not worry about adding logic to\ndisable the feature. 
I will take care of this in the next patch.\n\nThe answer for the question of \"I don't understand why you posted the\nprevious version of the patch without testing that it works\" is, for\nthe value of -1, the above description was my understanding and for\nthe value of 0, the older versions of the patch was behaving as\ndocumented. But with the later versions the behaviour got changed and\nI missed to modify the documentation. So I modified it in the last\nversion.\n\nPlease share your thoughts.\n\nThanks & Regards,\nNitin Jadhav\nOn Mon, Sep 27, 2021 at 9:47 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Sep 27, 2021 at 9:32 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > >>It also looks pretty silly to say that if you set the value to 10s, something\n> > >>will happen every 10s. What else would anyone expect to happen?\n> >\n> > @Robert: that's consistent with existing documentation, even though it might\n> > seem obvious and silly to us.\n> >\n> > | For example, if you set this to 250ms then all automatic vacuums and analyzes that run 250ms or longer will be logged\n> > | For example, if you set it to 250ms then all SQL statements that run 250ms or longer will be logged\n>\n> Fair enough, but I still don't like it much. I tried my hand at\n> rewriting this and came up with the attached:\n>\n> + Sets the amount of time after which the startup process will log\n> + a message about a long-running operation that is still in progress,\n> + as well as the interval between further progress messages for that\n> + operation. 
This setting is applied separately to each operation.\n> + For example, if syncing the data directory takes 25 seconds and\n> + thereafter resetting unlogged relations takes 8 seconds, and if this\n> + setting has the default value of 10 seconds, then a messages will be\n> + logged for syncing the data directory after it has been in progress\n> + for 10 seconds and again after it has been in progress for 20 seconds,\n> + but nothing will be logged for resetting unlogged operations.\n> + A setting of <literal>0</literal> disables the feature.\n>\n> I prefer this to Nitin's version because I think it could be unclear\n> to someone that the value applies separately to each operation,\n> whereas I don't think we need to document that we can't guarantee that\n> the messages will be perfectly on time - that's true of every kind of\n> scheduled event in pretty much every computer system - or what happens\n> if the system clock goes backwards. Those are things we should try to\n> get right, as well as we can anyway, but we don't need to tell the\n> user that we got them right.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 28 Sep 2021 17:37:00 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Tue, Sep 28, 2021 at 8:06 AM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n> I thought mentioning the unit in milliseconds and the example in\n> seconds would confuse the user, so I changed the example to\n> milliseconds.The message behind the above description looks good to me\n> however I feel some sentences can be explained in less words. The\n> information related to the units is missing and I feel it should be\n> mentioned in the document. The example says, if the setting has the\n> default value of 10 seconds, then it explains the behaviour. 
I feel it\n> may not be the default value, it can be any value set by the user. So\n> mentioning 'default' in the example does not look good to me. I feel\n> we just have to mention \"if this setting is set to the value of 10\n> seconds\". Below is the modified version of the above information.\n\nIt is common to mention what the default is as part of the\ndocumentation of a GUC. I don't see why this one should be an\nexception, especially since not mentioning it reduces the length of\nthe documentation by exactly one word.\n\nI don't mind the sentence you added at the end to clarify the default\nunits, but the way you've rewritten the first sentence makes it, in my\nopinion, much less clear.\n\n> I had added additional code to check the value of the\n> 'log_startup_progress_interval' variable and disable the feature in\n> case of -1 in the earlier versions of the patch (Specifically\n> v9.patch). There was a review comment for v9 patch and it resulted in\n> major refactoring of the patch.\n...\n> The answer for the question of \"I don't understand why you posted the\n> previous version of the patch without testing that it works\" is, for\n> the value of -1, the above description was my understanding and for\n> the value of 0, the older versions of the patch was behaving as\n> documented. But with the later versions the behaviour got changed and\n> I missed to modify the documentation. So I modified it in the last\n> version.\n\nv9 was posted on August 3rd. I told you that it wasn't working on\nSeptember 23rd. You posted a new version that still does not work on\nSeptember 27th. 
I think you should have tested each version of your\npatch before posting it, and especially after any major refactorings.\nAnd if for whatever reason you didn't, then certainly after I told you\non September 23rd that it didn't work, I think you should have fixed\nit before posting a new version.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 28 Sep 2021 10:59:27 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "> It is common to mention what the default is as part of the\n> documentation of a GUC. I don't see why this one should be an\n> exception, especially since not mentioning it reduces the length of\n> the documentation by exactly one word.\n>\n> I don't mind the sentence you added at the end to clarify the default\n> units, but the way you've rewritten the first sentence makes it, in my\n> opinion, much less clear.\n\nOk. I have kept your documentation as it is and added the sentence at\nthe end to clarify the default units.\n\n> v9 was posted on August 3rd. I told you that it wasn't working on\n> September 23rd. You posted a new version that still does not work on\n> September 27th. I think you should have tested each version of your\n> patch before posting it, and especially after any major refactorings.\n> And if for whatever reason you didn't, then certainly after I told you\n> on September 23rd that it didn't work, I think you should have fixed\n> it before posting a new version.\n\nSorry. There was a misunderstanding about this and for the patch\nshared on September 27th, I had tested for the value '0' and observed\nthat no progress messages were getting logged, probably the time at\nwhich 'enable_timeout_at' is getting called is past the 'next_timeout'\nvalue. This behaviour is completely dependent on the system. 
Now added\nan extra condition to disable the feature in case of '0' setting.\n\n> I think this comment can be worded better. It says it \"decides\", but it\n> doesn't actually decide on any action to take -- it just reports whether\n> the timer expired or not, to allow its caller to make the decision. In\n> such situations we just say something like \"Report whether startup\n> progress has caused a timeout, return true and rearm the timer if it\n> did, or just return false otherwise\"; and we don't indicate what the\n> value is going to be used *for*. Then the caller can use the boolean\n> return value to make a decision, such as whether something is going to\n> be logged. This function can be oblivious to details such as this:\n>\n> here we can just say \"No timeout has occurred\" and make no inference\n> about what's going to happen or not happen.\n\nModified the comment.\n\n> Also, for functions that do things like this we typically use English\n> sentence structure with the function name starting with the verb --\n> perhaps has_startup_progress_timeout_expired(). Sometimes we are lax\n> about this if we have some sort of poor-man's modularisation by using a\n> common prefix for several functions doing related things, which perhaps\n> could be \"startup_progress_*\" in your case, but your other functions are\n> already not doing that (such as begin_startup_progress_phase) so it's\n> not clear why you would not use the most natural name for this one.\n\nI agree that has_startup_progress_timeout_expired() is better than the\nexisting function name. So I changed the function name accordingly.\n\n> Please make sure to add ereport_startup_progress() as a translation\n> trigger in src/backend/nls.mk.\n\nI have added ereport_startup_progress() under the section\nGETTEXT_TRIGGERS and GETTEXT_FLAGS in src/backend/nls.mk. 
Also\nverified the messages in src/backend/po/postgres.pot file.\n\nKindly let me know if I have missed anything.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Tue, Sep 28, 2021 at 8:29 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Sep 28, 2021 at 8:06 AM Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> > I thought mentioning the unit in milliseconds and the example in\n> > seconds would confuse the user, so I changed the example to\n> > milliseconds.The message behind the above description looks good to me\n> > however I feel some sentences can be explained in less words. The\n> > information related to the units is missing and I feel it should be\n> > mentioned in the document. The example says, if the setting has the\n> > default value of 10 seconds, then it explains the behaviour. I feel it\n> > may not be the default value, it can be any value set by the user. So\n> > mentioning 'default' in the example does not look good to me. I feel\n> > we just have to mention \"if this setting is set to the value of 10\n> > seconds\". Below is the modified version of the above information.\n>\n> It is common to mention what the default is as part of the\n> documentation of a GUC. I don't see why this one should be an\n> exception, especially since not mentioning it reduces the length of\n> the documentation by exactly one word.\n>\n> I don't mind the sentence you added at the end to clarify the default\n> units, but the way you've rewritten the first sentence makes it, in my\n> opinion, much less clear.\n>\n> > I had added additional code to check the value of the\n> > 'log_startup_progress_interval' variable and disable the feature in\n> > case of -1 in the earlier versions of the patch (Specifically\n> > v9.patch). 
There was a review comment for v9 patch and it resulted in\n> > major refactoring of the patch.\n> ...\n> > The answer for the question of \"I don't understand why you posted the\n> > previous version of the patch without testing that it works\" is, for\n> > the value of -1, the above description was my understanding and for\n> > the value of 0, the older versions of the patch was behaving as\n> > documented. But with the later versions the behaviour got changed and\n> > I missed to modify the documentation. So I modified it in the last\n> > version.\n>\n> v9 was posted on August 3rd. I told you that it wasn't working on\n> September 23rd. You posted a new version that still does not work on\n> September 27th. I think you should have tested each version of your\n> patch before posting it, and especially after any major refactorings.\n> And if for whatever reason you didn't, then certainly after I told you\n> on September 23rd that it didn't work, I think you should have fixed\n> it before posting a new version.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com", "msg_date": "Wed, 29 Sep 2021 19:39:34 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "So, I've wondered about this part all along:\n\n> +/*\n> + * Calculates the timestamp at which the next timer should expire and enables\n> + * the timer accordingly.\n> + */\n> +static void\n> +reset_startup_progress_timeout(TimestampTz now)\n> +{\n> +\tTimestampTz next_timeout;\n> +\n> +\tnext_timeout = TimestampTzPlusMilliseconds(scheduled_startup_progress_timeout,\n> +\t\t\t\t\t\t\t\t\t\t\t log_startup_progress_interval);\n> +\tif (next_timeout < now)\n> +\t{\n> +\t\t/*\n> +\t\t * Either the timeout was processed so late that we missed an\n> +\t\t * entire cycle or system clock was set backwards.\n> +\t\t */\n> +\t\tnext_timeout = TimestampTzPlusMilliseconds(now, 
log_startup_progress_interval);\n> +\t}\n> +\n> +\tenable_timeout_at(STARTUP_PROGRESS_TIMEOUT, next_timeout);\n\nWhy is it that we set the next timeout to fire not at \"now + interval\"\nbut at \"when-it-should-have-fired-but-didn't + interval\"? As a user, if\nI request a message to be logged every N milliseconds, and one\nof them is a little bit delayed, then (assuming I set it to 10s) I still\nexpect the next one to occur at now+10s. I don't expect the next at\n\"now+5s\" if one is delayed 5s.\n\nIn other words, I think this function should just be\n enable_timeout_after(STARTUP_PROGRESS_TIMEOUT, log_startup_progress_interval);\n\nThis means you can remove the scheduled_startup_progress_timeout\nvariable.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"No hay ausente sin culpa ni presente sin disculpa\" (Prov. francés)\n\n\n", "msg_date": "Wed, 29 Sep 2021 14:36:14 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Wed, Sep 29, 2021 at 10:08 AM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n> Sorry. There was a misunderstanding about this and for the patch\n> shared on September 27th, I had tested for the value '0' and observed\n> that no progress messages were getting logged, probably the time at\n> which 'enable_timeout_at' is getting called is past the 'next_timeout'\n> value. This behaviour is completely dependent on the system. Now added\n> an extra condition to disable the feature in case of '0' setting.\n\nOh, interesting. I failed to consider that the behavior might vary\nfrom one system to another.\n\nI just noticed something else which I realize is the indirect result\nof my own suggestion but which doesn't actually look all that nice.\nYou've now got a call to RegisterTimeout(STARTUP_PROGRESS_TIMEOUT,\n...) in InitPostgres, guarded by !
IsPostmasterEnvironment, and then\nanother one in StartupProcessMain(). I think that's so that the\nfeature works in single-user mode, but that means that technically,\nwe're not reporting on the progress of the startup process. We're\nreporting progress on the startup operations that are normally\nperformed by the startup process. But that means that the\ndocumentation isn't quite accurate (because it mentions the startup\nprocess specifically) and that the placement of the code in startup.c\nis suspect (because that's specifically for the startup process) and\nthat basically every instance of the word \"process\" in the patch is\ntechnically a little bit wrong. I'm not sure if that's a big enough\nproblem to be worth worrying about or exactly what we ought to do\nabout it, but it doesn't seem fantastic. At a minimum, I think we\nshould probably try to eliminate as many references to the startup\nprocess as we can, and talk about startup progress or startup\noperations or something like that.\n\n+ * Start timestamp of the operation that occur during startup process.\n\nThis is not correct grammar - it would need to be \"operations that\noccur\" or \"operation that occurs\". But neither seems particularly\nclear about what the variable actually does. How about \"Time at which\nthe most recent startup operation started\"?\n\n+ * Indicates the timestamp at which the timer was supposed to expire.\n\n\"Indicates\" can be deleted, but also I think it would be better to\nstate the purpose of the timer i.e. \"Timestamp at which the next\nstartup progress message should be logged.\"\n\n+ enable_timeout_at(STARTUP_PROGRESS_TIMEOUT, next_timeout);\n+ scheduled_startup_progress_timeout = next_timeout;\n+ startup_progress_timer_expired = false;\n\nI think you should set startup_progress_timer_expired to false before\ncalling enable_timeout_at. Otherwise there's a race condition.
It's\nunlikely that the timer could expire and the interrupt handler fire\nbefore we reach startup_progress_timer_expired = false, but it seems\nlike there's no reason to take a chance.\n\n+ * Calculates the timestamp at which the next timer should expire and enables\n\nSo in some places you have verbs with an \"s\" on the end, like here,\nand in other places without, like in the next example. In \"telegraph\nstyle\" comments like this, this implicit subject is \"it\" or \"this,\"\nbut you don't write that. However you write the rest of the sentence\nas if it were there. So this should say \"calculate\" and \"enable\"\nrather than \"calculates\" and \"enables\".\n\n+ * Schedule a wakeup call for logging the progress of startup process.\n\nThis isn't really an accurate description, I think. It's not\nscheduling anything, and I don't know what a \"wakeup call\" is anyway.\n\"Set a flag indicating that it's time to log a progress report\" might\nbe better.\n\n+ * Sets the start timestamp of the current operation and also enables the\n\nSet. enable.\n\n+ * timeout for logging the progress of startup process.\n\nI think you could delete \"for logging the progress of startup process\"\nhere; that seems clear enough, and this reads a bit awkwardly.
If you\nwant to keep something like this I would write \"...enable the timeout\nso that the progress of the startup progress will be logged.\"\n\n+ * the timer if it did, otheriwse return false.\n\notherwise\n\n+ * Begin the startup progress phase to report the progress of syncing\n+ * data directory (syncfs).\n\nAll of the comments that start with \"Begin the startup progress phase\"\nshould instead start with \"Begin startup progress phase\".\n\nI think this could be condensed down to \"Prepare to report progress\nsyncing the data directory via syncfs\", and likewise \"Prepare to\nreport progress of the pre-fsync phase\", \"Prepare to report progress\nresetting unlogged relations,\" etc.\n\n+ gettext_noop(\"Sets the time interval between each progress update \"\n+ \"of the startup process.\"),\n\nThis is actually inaccurate. It's sort of the same problem I was\nworried about with respect to the documentation: it's NOT the interval\nbetween progress updates, because it applies separately to each\noperation. We need to say something that makes that clear, or at least\nthat doesn't get overtly the opposite impression. It's hard to do that\nbriefly, but maybe something like \"Time between progress updates for\nlong-running startup operations\"?\n\nWhatever we use here could also be the comment for\nlog_startup_progress_interval.\n\n+ * Logs the startup progress message if the timer has expired.\n\nthe -> a\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 29 Sep 2021 13:40:03 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Wed, Sep 29, 2021 at 1:36 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Why is it that we set the next timeout to fire not at \"now + interval\"\n> but at \"when-it-should-have-fired-but-didn't + interval\"?
As a user, if\n> I request a message to be logged every N milliseconds, and one\n> of them is a little bit delayed, then (assuming I set it to 10s) I still\n> expect the next one to occur at now+10s. I don't expect the next at\n> \"now+5s\" if one is delayed 5s.\n\nWell, this was my suggestion, because if you don't do this, you get\ndrift, which I think looks weird. Like the timestamps will be:\n\n13:41:05.012456\n13:41:15.072484\n13:41:25.149632\n\n...and it gets further and further off as it goes on.'\n\nI guess my expectation is different from yours: I expect that if I ask\nfor a message every 10 seconds, the time between messages is going to\nbe 10s, at least on average, not 10s + however much latency the system\nhas.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 29 Sep 2021 13:43:58 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Wed, Sep 29, 2021 at 02:36:14PM -0300, Alvaro Herrera wrote:\n> Why is it that we set the next timeout to fire not at \"now + interval\"\n> but at \"when-it-should-have-fired-but-didn't + interval\"? As a user, if\n> I request a message to be logged every N milliseconds, and one\n> of them is a little bit delayed, then (assuming I set it to 10s) I still\n> expect the next one to occur at now+10s.
I don't expect the next at\n> \"now+5s\" if one is delayed 5s.\n> \n> In other words, I think this function should just be\n> enable_timeout_after(STARTUP_PROGRESS_TIMEOUT, log_startup_progress_interval);\n> \n> This means you can remove the scheduled_startup_progress_timeout\n> variable.\n\nRobert requested the current behavior here.\nhttps://www.postgresql.org/message-id/CA%2BTgmoYkS1ZeWdSMFMBecMNxWonHk6J5eoX4FEQrpKtvEbXqGQ%40mail.gmail.com\n\nIt's confusing (at least) to get these kind of requests to change the behavior\nback and forth.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 29 Sep 2021 12:45:30 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On 2021-Sep-29, Robert Haas wrote:\n\n> Well, this was my suggestion, because if you don't do this, you get\n> drift, which I think looks weird. Like the timestamps will be:\n> \n> 13:41:05.012456\n> 13:41:15.072484\n> 13:41:25.149632\n> \n> ...and it gets further and further off as it goes on.'\n\nRight ... I actually *expect* this drift to occur. Maybe people\ngenerally don't like this, it just seems natural to me.
Are there other\nopinions on this aspect?\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"Nadie está tan esclavizado como el que se cree libre no siéndolo\" (Goethe)\n\n\n", "msg_date": "Wed, 29 Sep 2021 14:49:31 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On 2021-Sep-29, Justin Pryzby wrote:\n\n> Robert requested the current behavior here.\n> https://www.postgresql.org/message-id/CA%2BTgmoYkS1ZeWdSMFMBecMNxWonHk6J5eoX4FEQrpKtvEbXqGQ%40mail.gmail.com\n> \n> It's confusing (at least) to get these kind of requests to change the behavior\n> back and forth.\n\nWell, I did scan the thread to see if this had been discussed, and I\noverlooked that message. But there was no reply to that message, so\nit's not clear whether this was just Robert's opinion or consensus; in\nfact we now have exactly two votes on it (mine and Robert's).\n\nI think one person casting an opinion on one aspect does not set that\naspect in stone.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"It takes less than 2 seconds to get to 78% complete; that's a good sign.\nA few seconds later it's at 90%, but it seems to have stuck there. Did\nsomebody make percentages logarithmic while I wasn't looking?\"\n http://smylers.hates-software.com/2005/09/08/1995c749.html\n\n\n", "msg_date": "Wed, 29 Sep 2021 14:52:46 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-Sep-29, Robert Haas wrote:\n>> Well, this was my suggestion, because if you don't do this, you get\n>> drift, which I think looks weird.
Like the timestamps will be:\n>> \n>> 13:41:05.012456\n>> 13:41:15.072484\n>> 13:41:25.149632\n>> \n>> ...and it gets further and further off as it goes on.'\n\n> Right ... I actually *expect* this drift to occur. Maybe people\n> generally don't like this, it just seems natural to me. Are there other\n> opinions on this aspect?\n\nFWIW, I agree with Robert that it's nicer if the timeout doesn't drift.\nThere's a limit to how much complexity I'm willing to tolerate for that,\nbut it doesn't seem like this exceeds it.\n\nThe real comment I'd have here, though, is that writing one-off\ncode for this purpose is bad. If we have a need for a repetitive\ntimeout, it'd be better to add the feature to timeout.c explicitly.\nThat would probably also remove the need for extra copies of the\ntimeout time.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 29 Sep 2021 14:06:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Wed, Sep 29, 2021 at 1:52 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> I think one person casting an opinion on one aspect does not set that\n> aspect in stone.\n\nOf course not. I was just explaining how the patch ended up like it did.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 29 Sep 2021 14:43:53 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Wed, Sep 29, 2021 at 2:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The real comment I'd have here, though, is that writing one-off\n> code for this purpose is bad.
If we have a need for a repetitive\n> timeout, it'd be better to add the feature to timeout.c explicitly.\n> That would probably also remove the need for extra copies of the\n> timeout time.\n\nI'm not sure that really helps very much, honestly. I mean it would\nbe useful in this particular case, but there are other cases where we\nhave logic like this already, and this wouldn't do anything about\nthose. For example, consider autoprewarm_main(). Like this code, that\ncode thinks (perhaps just because I'm the one who reviewed it) that\nthe next time should be measured from the last time ... but an\nenhancement to the timeout machinery wouldn't help it at all. I\nsuspect there are other cases like this elsewhere, because this is\nwhat I personally tend to think is the right behavior and I feel like\nit comes up in patch reviews from time to time, but I'm not finding\nany at the moment. Even if I'm right that they exist, I'm not sure\nthey look much like each other or can easily reuse any code.\n\nAnd then again on the other hand, BackgroundWriterMain() thinks that\nthe next time should be measured from the time we got around to doing\nit, not the scheduled time. I guess we don't really have any\nconsistent practice here.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 29 Sep 2021 16:59:09 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Sep 29, 2021 at 2:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The real comment I'd have here, though, is that writing one-off\n>> code for this purpose is bad.
If we have a need for a repetitive\n>> timeout, it'd be better to add the feature to timeout.c explicitly.\n>> That would probably also remove the need for extra copies of the\n>> timeout time.\n\n> I'm not sure that really helps very much, honestly.\n\nI didn't claim there are any other places that could use the feature\n*today*. But once we've got one, it seems like there could be more\ntomorrow. In any case, I dislike keeping timeout state data outside\ntimeout.c, because it's so likely to get out-of-sync that way.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 29 Sep 2021 17:12:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Wed, Sep 29, 2021 at 5:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I didn't claim there are any other places that could use the feature\n> *today*. But once we've got one, it seems like there could be more\n> tomorrow. In any case, I dislike keeping timeout state data outside\n> timeout.c, because it's so likely to get out-of-sync that way.\n\nWell, I had a quick go at implementing this (attached).\n\nIt seems to do a satisfactory job preventing drift over time, but it\ndoesn't behave nicely if you set the system clock backward. With a bit\nof extra debugging output:\n\n2021-09-30 14:23:50.291 EDT [2279] LOG: scheduling wakeup in 2 secs,\n998727 usecs\n2021-09-30 14:23:53.291 EDT [2279] LOG: scheduling wakeup in 2 secs,\n998730 usecs\n2021-09-30 14:23:56.291 EDT [2279] LOG: scheduling wakeup in 2 secs,\n998728 usecs\n2021-09-30 14:20:01.154 EDT [2279] LOG: scheduling wakeup in 238\nsecs, 135532 usecs\n2021-09-30 14:23:59.294 EDT [2279] LOG: scheduling wakeup in 2 secs, 995922 use\n\nThe issue here is that fin_time is really the driving force behind\neverything timeout.c does. In particular, it determines the order of\nactive_timeouts[].
And that's not really correct either for\nenable_timeout_after(), or for the new function I added in this draft\npatch, enable_timeout_every(). When I say I want my handler to be\nfired in 3 s, I don't mean that I want it to be fired when the system\ntime is 3 seconds greater than it is right now. I mean I want it to be\nfired in 3 actual seconds, regardless of what dumb thing the system\nclock may choose to do. I don't really think that precise behavior can\nbe implemented, but ideally if a timeout that was supposed to happen\nafter 3 s is now scheduled for a time that is more than 3 seconds\nbeyond the current value of the system clock, we'd move the firing\ntime backwards to 3 seconds beyond the current system clock. That way,\nif you set the system time backward by 4 minutes, you might see a few\nseconds of delay before the next firing, but you wouldn't go into the\ntank for 4 minutes.\n\nI don't see an obvious way of making timeout.c behave like that\nwithout major surgery, though. If nobody else does either, then we\ncould (1) stick with something closer to Nitin's current patch, which\ntries to handle this concern outside of timeout.c, (2) adopt something\nlike the attached 0001 and leave the question of improved behavior in\ncase of backwards system clock adjustments for another day, or (3)\nundertake to rewrite timeout.c as a precondition of being able to log\nmessages about why startup is slow. I'm not a huge fan of (3) but I'm\nopen to suggestions.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 30 Sep 2021 14:41:42 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> ... When I say I want my handler to be\n> fired in 3 s, I don't mean that I want it to be fired when the system\n> time is 3 seconds greater than it is right now.
I mean I want it to be\n> fired in 3 actual seconds, regardless of what dumb thing the system\n> clock may choose to do.\n\nThat would be lovely, certainly. But aren't you moving the goalposts\nrather far? I don't think we make any promises about such things\ntoday, so why has the issue suddenly gotten more pressing? In particular,\nwhy do you think Nitin's patch is proof against this? Seems to me it's\nprobably got *more* failure cases, not fewer, if the system clock is\nacting funny.\n\nBTW, one could imagine addressing this concern by having timeout.c work\nwith CLOCK_MONOTONIC instead of the regular wall clock. But I fear\nwe'd have to drop enable_timeout_at(), for lack of ability to translate\nbetween CLOCK_MONOTONIC timestamps and those used by anybody else.\nAlso get_timeout_start_time/get_timeout_finish_time would become\nproblematic. Maybe we only really care about deltas, so the more\nrestrictive API would be workable, but it seems like a nontrivial\namount of work.\n\nOn the whole, in these days of NTP, I'm not sure I care to spend\nlarge amounts of effort on dealing with a bogus system clock.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 30 Sep 2021 15:10:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Thu, Sep 30, 2021 at 3:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> That would be lovely, certainly. But aren't you moving the goalposts\n> rather far? I don't think we make any promises about such things\n> today, so why has the issue suddenly gotten more pressing?\n\nYeah, perhaps it's best not to worry about it. I dislike failure to\nworry about that case on general principle, but I agree with you that\nit seems to be moving the goalposts a disproportionate distance.\n\n> In particular,\n> why do you think Nitin's patch is proof against this?
Seems to me it's\n> probably got *more* failure cases, not fewer, if the system clock is\n> acting funny.\n\nYou might be right. I sort of assumed that timeout.c had some defense\nagainst this, but since that seems not to be the case, I suppose no\nfacility that depends on it can hope to stay out of trouble either.\n\n> On the whole, in these days of NTP, I'm not sure I care to spend\n> large amounts of effort on dealing with a bogus system clock.\n\nIt's certainly less of an issue than it used to be back in my day.\n\nAny thoughts on the patch I attached?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 30 Sep 2021 17:08:17 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Thu, Sep 30, 2021 at 05:08:17PM -0400, Robert Haas wrote:\n> It's certainly less of an issue than it used to be back in my day.\n> \n> Any thoughts on the patch I attached?\n\nI don't know. Anyway, this is actively worked on, so I have taken the\nliberty to move that to the next CF.\n--\nMichael", "msg_date": "Fri, 1 Oct 2021 15:22:11 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Wed, Sep 29, 2021 at 11:10 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Sep 29, 2021 at 10:08 AM Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> > Sorry. There was a misunderstanding about this and for the patch\n> > shared on September 27th, I had tested for the value '0' and observed\n> > that no progress messages were getting logged, probably the time at\n> > which 'enable_timeout_at' is getting called is past the 'next_timeout'\n> > value. This behaviour is completely dependent on the system.
Now added\n> > an extra condition to disable the feature in case of '0' setting.\n>\n> Oh, interesting. I failed to consider that the behavior might vary\n> from one system to another.\n>\n> I just noticed something else which I realize is the indirect result\n> of my own suggestion but which doesn't actually look all that nice.\n> You've now got a call to RegisterTimeout(STARTUP_PROGRESS_TIMEOUT,\n> ...) in InitPostgres, guarded by ! IsPostmasterEnvironment, and then\n> another one in StartupProcessMain(). I think that's so that the\n> feature works in single-user mode, but that means that technically,\n> we're not reporting on the progress of the startup process. We're\n> reporting progress on the startup operations that are normally\n> performed by the startup process. But that means that the\n> documentation isn't quite accurate (because it mentions the startup\n> process specifically) and that the placement of the code in startup.c\n> is suspect (because that's specifically for the startup process) and\n> that basically every instance of the word \"process\" in the patch is\n> technically a little bit wrong. I'm not sure if that's a big enough\n> problem to be worth worrying about or exactly what we ought to do\n> about it, but it doesn't seem fantastic. At a minimum, I think we\n> should probably try to eliminate as many references to the startup\n> process as we can, and talk about startup progress or startup\n> operations or something like that.\n>\n> + * Start timestamp of the operation that occur during startup process.\n>\n> This is not correct grammar - it would need to be \"operations that\n> occur\" or \"operation that occurs\". But neither seems particularly\n> clear about what the variable actually does.
How about \"Time at which\n> the most recent startup operation started\"?\n>\n> + * Indicates the timestamp at which the timer was supposed to expire.\n>\n> \"Indicates\" can be deleted, but also I think it would be better to\n> state the purpose of the timer i.e. \"Timestamp at which the next\n> startup progress message should be logged.\"\n>\n> + enable_timeout_at(STARTUP_PROGRESS_TIMEOUT, next_timeout);\n> + scheduled_startup_progress_timeout = next_timeout;\n> + startup_progress_timer_expired = false;\n>\n> I think you should set startup_progress_timer_expired to false before\n> calling enable_timeout_at. Otherwise there's a race condition. It's\n> unlikely that the timer could expire and the interrupt handler fire\n> before we reach startup_progress_timer_expired = false, but it seems\n> like there's no reason to take a chance.\n>\n> + * Calculates the timestamp at which the next timer should expire and enables\n>\n> So in some places you have verbs with an \"s\" on the end, like here,\n> and in other places without, like in the next example. In \"telegraph\n> style\" comments like this, this implicit subject is \"it\" or \"this,\"\n> but you don't write that. However you write the rest of the sentence\n> as if it were there. So this should say \"calculate\" and \"enable\"\n> rather than \"calculates\" and \"enables\".\n>\n> + * Schedule a wakeup call for logging the progress of startup process.\n>\n> This isn't really an accurate description, I think. It's not\n> scheduling anything, and I don't know what a \"wakeup call\" is anyway.\n> \"Set a flag indicating that it's time to log a progress report\" might\n> be better.\n>\n> + * Sets the start timestamp of the current operation and also enables the\n>\n> Set. enable.\n>\n> + * timeout for logging the progress of startup process.\n>\n> I think you could delete \"for logging the progress of startup process\"\n> here; that seems clear enough, and this reads a bit awkwardly. 
If you\n> want to keep something like this I would write \"...enable the timeout\n> so that the progress of the startup progress will be logged.\"\n>\n> + * the timer if it did, otheriwse return false.\n>\n> otherwise\n>\n> + * Begin the startup progress phase to report the progress of syncing\n> + * data directory (syncfs).\n>\n> All of the comments that start with \"Begin the startup progress phase\"\n> should instead start with \"Begin startup progress phase\".\n>\n> I think this could be condensed down to \"Prepare to report progress\n> syncing the data directory via syncfs\", and likewise \"Prepare to\n> report progress of the pre-fsync phase\", \"Prepare to report progress\n> resetting unlogged relations,\" etc.\n>\n> + gettext_noop(\"Sets the time interval between each progress update \"\n> + \"of the startup process.\"),\n>\n> This is actually inaccurate. It's sort of the same problem I was\n> worried about with respect to the documentation: it's NOT the interval\n> between progress updates, because it applies separately to each\n> operation. We need to say something that makes that clear, or at least\n> that doesn't get overtly the opposite impression. It's hard to do that\n> briefly, but maybe something like \"Time between progress updates for\n> long-running startup operations\"?\n>\n> Whatever we use here could also be the comment for\n> log_startup_progress_interval.\n>\n> + * Logs the startup progress message if the timer has expired.\n>\n> the -> a\n>\n\nIn addition I have little concern about ereport_startup_progress() use:\n\n+ if (!StandbyMode)\n+ ereport_startup_progress(\"redo in progress,\nelapsed time: %ld.%02d s, current LSN: %X/%X\",\n+ LSN_FORMAT_ARGS(ReadRecPtr));\n\nThe format string, its input, input count, and input position\nare not properly aligned.
The input for \"elapsed time: %ld.%02d s\" is\ngetting value implicitly inside ereport_startup_progress() which\ncould be confusing; usually, the input count should be the same as\nexpected in the format string in the same sequence that is expected to\nappear in the log message.\n\nI think the \"elapsed time\" part can be implicitly added to the error\nmessage inside ereport_startup_progress() which is common to all\ncalls.\n\nRegards,\nAmul\n\n\n", "msg_date": "Wed, 13 Oct 2021 18:35:33 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Wed, Oct 13, 2021 at 9:06 AM Amul Sul <sulamul@gmail.com> wrote:\n> I think the \"elapsed time\" part can be implicitly added to the error\n> message inside ereport_startup_progress() which is common to all\n> calls.\n\nNot if it means having to call psprintf there!\n\nIf there's some way we could do it with macro tricks, it might be\nworth considering, but I'm not sure there is, or that it would be less\nconfusing.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 13 Oct 2021 09:27:12 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Thu, Sep 30, 2021 at 5:08 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> Any thoughts on the patch I attached?\n\nApparently not, but here's a v2 anyway. In this version I made\nenable_timeout_every() a three-argument function, so that the caller\ncan specify both the first time at which the timeout routine should be\ncalled and the interval between them, instead of only the latter.
That\nseems to be more convenient for this use case, and is more powerful in\ngeneral.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 18 Oct 2021 11:45:20 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "> Apparently not, but here's a v2 anyway. In this version I made\n> enable_timeout_every() a three-argument function, so that the caller\n> can specify both the first time at which the timeout routine should be\n> called and the interval between them, instead of only the latter. That\n> seems to be more convenient for this use case, and is more powerful in\n> general.\n\nThanks for sharing the patch. Overall approach looks good to me. But\njust one concern about using enable_timeout_every() functionality. As\nper my understanding the caller should calculate the first scheduled\ntimeout (now + interval) and pass it as the second argument but this\nis not the same in 'v2-0002-Quick-testing-hack.patch'. Anyways I have\ndone the changes as I have mentioned (like now + interval). Kindly\ncorrect me if I am wrong. I am attaching 2 patches here.\n'v19-0001-Add-enable_timeout_every-to-fire-the-same-timeout.patch' is\nthe same as Robert's v2 patch. I have rebased my patch on top of this\nand it is 'v19-0002-startup-progress.patch'.\n\n> I just noticed something else which I realize is the indirect result\n> of my own suggestion but which doesn't actually look all that nice.\n> You've now got a call to RegisterTimeout(STARTUP_PROGRESS_TIMEOUT,\n> ...) in InitPostgres, guarded by ! IsPostmasterEnvironment, and then\n> another one in StartupProcessMain(). I think that's so that the\n> feature works in single-user mode, but that means that technically,\n> we're not reporting on the progress of the startup process. We're\n> reporting progress on the startup operations that are normally\n> performed by the startup process.
But that means that the\n> documentation isn't quite accurate (because it mentions the startup\n> process specifically) and that the placement of the code in startup.c\n> is suspect (because that's specifically for the startup process) and\n> that basically every instance of the word \"process\" in the patch is\n> technically a little bit wrong. I'm not sure if that's a big enough\n> problem to be worth worrying about or exactly what we ought to do\n> about it, but it doesn't seem fantastic. At a minimum, I think we\n> should probably try to eliminate as many references to the startup\n> process as we can, and talk about startup progress or startup\n> operations or something like that.\n\nYeah right. I have modified the comments accordingly and also fixed\nthe other review comments related to the code comments.\n\nI have modified the code to include a call to RegisterTimeout() in\nonly one place than the two calls previously. Initially I thought to\ncall this in begin_startup_progress_phase(). I feel this is not a\nbetter choice since begin_startup_progress_phase function is getting\ncalled in many places. So it calls RegisterTimeout() many times which\nis not required. I feel StartupXLOG() is a better place for this since\nStartupXLOG() gets called during the startup process, bootstrap\nprocess or standalone backend. As per earlier discussion we need\nsupport for this in the case of startup process and standalone\nbackend. Hence guarded this with '!IsBootstrapProcessingMode()'. Also\nverified the behaviour in both of the cases. Please correct me if I am\nwrong.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Mon, Oct 18, 2021 at 9:15 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Sep 30, 2021 at 5:08 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > Any thoughts on the patch I attached?\n>\n> Apparently not, but here's a v2 anyway. 
In this version I made\n> enable_timeout_every() a three-argument function, so that the caller\n> can specify both the first time at which the timeout routine should be\n> called and the interval between them, instead of only the latter. That\n> seems to be more convenient for this use case, and is more powerful in\n> general.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com", "msg_date": "Tue, 19 Oct 2021 18:37:53 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Tue, Oct 19, 2021 at 9:06 AM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n> Thanks for sharing the patch. Overall approach looks good to me. But\n> just one concern about using enable_timeout_every() functionality. As\n> per my understanding the caller should calculate the first scheduled\n> timeout (now + interval) and pass it as the second argument but this\n> is not the same in 'v2-0002-Quick-testing-hack.patch'. Anyways I have\n> done the changes as I have mentioned (like now + interval). Kindly\n> correct me if I am wrong. I am attaching 2 patches here.\n> 'v19-0001-Add-enable_timeout_every-to-fire-the-same-timeout.patch' is\n> the same as Robert's v2 patch. 
I have rebased my patch on top of this\n> and it is 'v19-0002-startup-progress.patch'.\n\nThis version looks fine, so I have committed it (and my\nenable_timeout_every patch also, as a necessary prerequisite).\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Oct 2021 11:56:32 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Mon, Oct 25, 2021 at 9:26 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Oct 19, 2021 at 9:06 AM Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> > Thanks for sharing the patch. Overall approach looks good to me. But\n> > just one concern about using enable_timeout_every() functionality. As\n> > per my understanding the caller should calculate the first scheduled\n> > timeout (now + interval) and pass it as the second argument but this\n> > is not the same in 'v2-0002-Quick-testing-hack.patch'. Anyways I have\n> > done the changes as I have mentioned (like now + interval). Kindly\n> > correct me if I am wrong. I am attaching 2 patches here.\n> > 'v19-0001-Add-enable_timeout_every-to-fire-the-same-timeout.patch' is\n> > the same as Robert's v2 patch. 
I have rebased my patch on top of this\n> > and it is 'v19-0002-startup-progress.patch'.\n>\n> This version looks fine, so I have committed it (and my\n> enable_timeout_every patch also, as a necessary prerequisite).\n\nThanks for getting this in.\n\nI have few more thoughts:\n\nCan we also log the total time the startup process took to recover,\nand also the total time each stage of the recovery/redo processing\ntook: 1) into a file or 2) emitting that info via a new hook 3) into a\nsystem catalog table (assuming at the end of the recovery the database\nis in a consistent state, but I'm not sure if we ever update any\ncatalog tables in/after the startup/recovery phase).\n\nThis will help the users/admins/developers for summarizing, analytical\nand debugging purposes. This information can easily help us to\nunderstand the recovery patterns.\n\nThoughts?\n\nIf okay, I can spend some more time and start a separate thread to discuss.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Tue, 26 Oct 2021 13:49:00 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Tue, Oct 26, 2021 at 4:19 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> Can we also log the total time the startup process took to recover,\n> and also the total time each stage of the recovery/redo processing\n> took: 1) into a file or 2) emitting that info via a new hook 3) into a\n> system catalog table (assuming at the end of the recovery the database\n> is in a consistent state, but I'm not sure if we ever update any\n> catalog tables in/after the startup/recovery phase).\n\n#3 would be hard to do because there could be any number of databases\nand it is unclear which one we ought to update. 
Also, we'd have to\nlaunch a background worker to connect to that database in order to do\nit, and be prepared for what happens if that worker fails to get the\nwork done for whatever reason. Also, it is unclear why we should only\nlog this specific thing to a system catalog and not anything else.\n\n#1 and #2 could certainly be done, but I'm not sure what the use case\nis, and I think it's a separate proposal from what we did here. So I\nthink a new thread would be appropriate if you want to make a new\nproposal.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 26 Oct 2021 10:06:40 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Mon, Oct 25, 2021 at 11:56 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> This version looks fine, so I have committed it (and my\n> enable_timeout_every patch also, as a necessary prerequisite).\n\nI was fooling around with a test setup today, working on an unrelated\nproblem, and this happened:\n\n2021-10-28 14:21:23.145 EDT [92010] LOG: resetting unlogged relations\n(init), elapsed time: 0.00 s, current path: base/13020\n\nThat's not supposed to happen. I assume the problem is that the\ntimeout for the previous phase fired just as we were beginning a new\none, and the code got confused. 
I think we probably need to do\nsomething like this to make sure that the timeout from one operation\ncan't trigger a log message for the next:\n\ndiff --git a/src/backend/postmaster/startup.c b/src/backend/postmaster/startup.c\nindex 28e68dd871..47ec737888 100644\n--- a/src/backend/postmaster/startup.c\n+++ b/src/backend/postmaster/startup.c\n@@ -320,6 +320,8 @@ begin_startup_progress_phase(void)\n if (log_startup_progress_interval == 0)\n return;\n\n+ disable_timeout(STARTUP_PROGRESS_TIMEOUT, false);\n+ startup_progress_timer_expired = false;\n startup_progress_phase_start_time = GetCurrentTimestamp();\n fin_time = TimestampTzPlusMilliseconds(startup_progress_phase_start_time,\n log_startup_progress_interval);\n\nThoughts?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 28 Oct 2021 14:29:21 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "> I was fooling around with a test setup today, working on an unrelated\n> problem, and this happened:\n>\n> 2021-10-28 14:21:23.145 EDT [92010] LOG: resetting unlogged relations\n> (init), elapsed time: 0.00 s, current path: base/13020\n\nNice catch and interesting case.\n\n> That's not supposed to happen. I assume the problem is that the\n> timeout for the previous phase fired just as we were beginning a new\n> one, and the code got confused. 
I think we probably need to do\n> something like this to make sure that the timeout from one operation\n> can't trigger a log message for the next:\n>\n> diff --git a/src/backend/postmaster/startup.c b/src/backend/postmaster/startup.c\n> index 28e68dd871..47ec737888 100644\n> --- a/src/backend/postmaster/startup.c\n> +++ b/src/backend/postmaster/startup.c\n> @@ -320,6 +320,8 @@ begin_startup_progress_phase(void)\n> if (log_startup_progress_interval == 0)\n> return;\n>\n> + disable_timeout(STARTUP_PROGRESS_TIMEOUT, false);\n> + startup_progress_timer_expired = false;\n> startup_progress_phase_start_time = GetCurrentTimestamp();\n> fin_time = TimestampTzPlusMilliseconds(startup_progress_phase_start_time,\n> log_startup_progress_interval);\n>\n> Thoughts?\n\nYes. I agree that this is not an expected behaviour and I do agree\nthat, probably the timeout for the previous phase fired just as we\nwere beginning a new one. For each operation, we call\nbegin_startup_progress_phase() before starting the operation and\none/multiple calls to ereport_startup_progress() to report the\nprogress and intentionally we have not added any functionality to\ndisable the timer at the end of the operation. The timer remains\nactive and there may be multiple calls to\nstartup_progress_timeout_handler() which sets a flag to true. So\nwhenever we call a begin_startup_progress_phase() for the next\noperation, we do enable the timer (In my understanding it will\ninternally disable the old timer and schedule a new one) but the flag\nis already set to true by the previous timer. Hence the next call to\nereport_startup_progress() logs a message. So I feel just setting\n'startup_progress_timer_expired' to false in\nbegin_startup_progress_phase() would work. 
Please correct me if I am\nwrong.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Thu, Oct 28, 2021 at 11:59 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Oct 25, 2021 at 11:56 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > This version looks fine, so I have committed it (and my\n> > enable_timeout_every patch also, as a necessary prerequisite).\n>\n> I was fooling around with a test setup today, working on an unrelated\n> problem, and this happened:\n>\n> 2021-10-28 14:21:23.145 EDT [92010] LOG: resetting unlogged relations\n> (init), elapsed time: 0.00 s, current path: base/13020\n>\n> That's not supposed to happen. I assume the problem is that the\n> timeout for the previous phase fired just as we were beginning a new\n> one, and the code got confused. I think we probably need to do\n> something like this to make sure that the timeout from one operation\n> can't trigger a log message for the next:\n>\n> diff --git a/src/backend/postmaster/startup.c b/src/backend/postmaster/startup.c\n> index 28e68dd871..47ec737888 100644\n> --- a/src/backend/postmaster/startup.c\n> +++ b/src/backend/postmaster/startup.c\n> @@ -320,6 +320,8 @@ begin_startup_progress_phase(void)\n> if (log_startup_progress_interval == 0)\n> return;\n>\n> + disable_timeout(STARTUP_PROGRESS_TIMEOUT, false);\n> + startup_progress_timer_expired = false;\n> startup_progress_phase_start_time = GetCurrentTimestamp();\n> fin_time = TimestampTzPlusMilliseconds(startup_progress_phase_start_time,\n> log_startup_progress_interval);\n>\n> Thoughts?\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 29 Oct 2021 17:08:52 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Fri, Oct 29, 2021 at 7:37 AM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n> ereport_startup_progress() logs a message. 
So I feel just setting\n> 'startup_progress_timer_expired' to false in\n> begin_startup_progress_phase() would work. Please correct me if I am\n> wrong.\n\nI think you're wrong. If we did that, the previous timer could fire\nright after we set startup_progress_timer_expired = false, and before\nwe reschedule the timeout. It seems annoying to have to disable the\ntimeout and immediately turn around and re-enable it, but I don't see\nhow to avoid the race condition otherwise.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 29 Oct 2021 08:40:06 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "> I think you're wrong. If we did that, the previous timer could fire\n> right after we set startup_progress_timer_expired = false, and before\n> we reschedule the timeout. It seems annoying to have to disable the\n> timeout and immediately turn around and re-enable it, but I don't see\n> how to avoid the race condition otherwise.\n\nRight. There is a possibility of race conditions. In that case the\nabove changes look good to me.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Fri, Oct 29, 2021 at 6:10 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Oct 29, 2021 at 7:37 AM Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> > ereport_startup_progress() logs a message. So I feel just setting\n> > 'startup_progress_timer_expired' to false in\n> > begin_startup_progress_phase() would work. Please correct me if I am\n> > wrong.\n>\n> I think you're wrong. If we did that, the previous timer could fire\n> right after we set startup_progress_timer_expired = false, and before\n> we reschedule the timeout. 
It seems annoying to have to disable the\n> timeout and immediately turn around and re-enable it, but I don't see\n> how to avoid the race condition otherwise.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>\n>\n\n\n", "msg_date": "Fri, 29 Oct 2021 18:41:29 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Fri, Oct 29, 2021 at 9:10 AM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n> > I think you're wrong. If we did that, the previous timer could fire\n> > right after we set startup_progress_timer_expired = false, and before\n> > we reschedule the timeout. It seems annoying to have to disable the\n> > timeout and immediately turn around and re-enable it, but I don't see\n> > how to avoid the race condition otherwise.\n>\n> Right. There is a possibility of race conditions. In that case the\n> above changes look good to me.\n\nCommitted.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 29 Oct 2021 14:44:38 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Sat, Oct 30, 2021 at 7:44 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Committed.\n\nIs it expected that an otherwise idle standby's recovery process\nreceives SIGALRM every N seconds, or should the timer be canceled at\nthat point, as there is no further progress to report?\n\n\n", "msg_date": "Wed, 9 Nov 2022 00:04:33 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Tue, Nov 8, 2022 at 4:35 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Sat, Oct 30, 2021 at 7:44 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > Committed.\n>\n> Is it 
expected that an otherwise idle standby's recovery process\n> receives SIGALRM every N seconds, or should the timer be canceled at\n> that point, as there is no further progress to report?\n\nNice catch. Yeah, that seems unnecessary, see the below standby logs.\nI think we need to disable_timeout(STARTUP_PROGRESS_TIMEOUT, false);,\nsomething like the attached? I think there'll be no issue with the\npatch since the StandbyMode gets reset only at the end of recovery (in\nFinishWalRecovery()) but it can very well be set during recovery (in\nReadRecord()). Note that I also added an assertion in\nhas_startup_progress_timeout_expired(), just in case.\n\n2022-11-08 11:28:23.563 UTC [980909] LOG: SIGALRM handle_sig_alarm received\n2022-11-08 11:28:23.563 UTC [980909] LOG:\nstartup_progress_timeout_handler called\n2022-11-08 11:28:33.563 UTC [980909] LOG: SIGALRM handle_sig_alarm received\n2022-11-08 11:28:33.563 UTC [980909] LOG:\nstartup_progress_timeout_handler called\n2022-11-08 11:28:43.563 UTC [980909] LOG: SIGALRM handle_sig_alarm received\n2022-11-08 11:28:43.563 UTC [980909] LOG:\nstartup_progress_timeout_handler called\n2022-11-08 11:28:53.563 UTC [980909] LOG: SIGALRM handle_sig_alarm received\n2022-11-08 11:28:53.563 UTC [980909] LOG:\nstartup_progress_timeout_handler called\n\nWhilte at it, I noticed that we report redo progress for PITR, but we\ndon't report when standby enters archive recovery mode, say due to a\nfailure in the connection to primary or after the promote signal is\nfound. 
Isn't it useful to report in this case as well to know the\nrecovery progress?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 8 Nov 2022 18:02:42 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Tue, 8 Nov 2022 at 12:33, Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, Nov 8, 2022 at 4:35 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> >\n> > On Sat, Oct 30, 2021 at 7:44 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > Committed.\n> >\n> > Is it expected that an otherwise idle standby's recovery process\n> > receives SIGALRM every N seconds, or should the timer be canceled at\n> > that point, as there is no further progress to report?\n>\n> Nice catch. Yeah, that seems unnecessary, see the below standby logs.\n> I think we need to disable_timeout(STARTUP_PROGRESS_TIMEOUT, false);,\n> something like the attached? I think there'll be no issue with the\n> patch since the StandbyMode gets reset only at the end of recovery (in\n> FinishWalRecovery()) but it can very well be set during recovery (in\n> ReadRecord()). 
Note that I also added an assertion in\n> has_startup_progress_timeout_expired(), just in case.\n>\n> 2022-11-08 11:28:23.563 UTC [980909] LOG: SIGALRM handle_sig_alarm received\n> 2022-11-08 11:28:23.563 UTC [980909] LOG:\n> startup_progress_timeout_handler called\n> 2022-11-08 11:28:33.563 UTC [980909] LOG: SIGALRM handle_sig_alarm received\n> 2022-11-08 11:28:33.563 UTC [980909] LOG:\n> startup_progress_timeout_handler called\n> 2022-11-08 11:28:43.563 UTC [980909] LOG: SIGALRM handle_sig_alarm received\n> 2022-11-08 11:28:43.563 UTC [980909] LOG:\n> startup_progress_timeout_handler called\n> 2022-11-08 11:28:53.563 UTC [980909] LOG: SIGALRM handle_sig_alarm received\n> 2022-11-08 11:28:53.563 UTC [980909] LOG:\n> startup_progress_timeout_handler called\n>\n> Whilte at it, I noticed that we report redo progress for PITR, but we\n> don't report when standby enters archive recovery mode, say due to a\n> failure in the connection to primary or after the promote signal is\n> found. Isn't it useful to report in this case as well to know the\n> recovery progress?\n\nI think your patch disables progress too early, effectively turning\noff the standby progress feature. 
The purpose was to report on things\nthat take long periods during recovery, not just prior to recovery.\n\nI would advocate that we disable progress only while waiting, as I've done here:\nhttps://www.postgresql.org/message-id/CANbhV-GcWjZ2cmj0uCbZDWQUHnneMi_4EfY3dVWq0-yD5o7Ccg%40mail.gmail.com\n\n--\nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 14 Nov 2022 12:37:27 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Mon, Nov 14, 2022 at 7:37 AM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n> > Whilte at it, I noticed that we report redo progress for PITR, but we\n> > don't report when standby enters archive recovery mode, say due to a\n> > failure in the connection to primary or after the promote signal is\n> > found. Isn't it useful to report in this case as well to know the\n> > recovery progress?\n>\n> I think your patch disables progress too early, effectively turning\n> off the standby progress feature. The purpose was to report on things\n> that take long periods during recovery, not just prior to recovery.\n>\n> I would advocate that we disable progress only while waiting, as I've done here:\n> https://www.postgresql.org/message-id/CANbhV-GcWjZ2cmj0uCbZDWQUHnneMi_4EfY3dVWq0-yD5o7Ccg%40mail.gmail.com\n\nMaybe I'm confused here, but I think that, on a standby, startup\nprogress messages are only printed until the main redo loop is\nreached. Otherwise, we would print a message on a standby every 10s\nforever, which seems like a thing that most users would not like. So I\nthink that Bharath has the right idea here.\n\nI don't think that his patch is right in detail, though. I don't think\nthe call to disable_timeout() needs to be conditional, and I don't\nthink the Assert is correct. 
Also, I think that your patch has the\nright idea in encapsulating the disable_timeout() call inside a new\nfunction disable_startup_progress_timeout(), rather than having the\ndetails known directly by xlogrecovery.c.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Nov 2022 11:01:12 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Mon, Nov 14, 2022 at 9:31 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Nov 14, 2022 at 7:37 AM Simon Riggs\n> <simon.riggs@enterprisedb.com> wrote:\n> > > Whilte at it, I noticed that we report redo progress for PITR, but we\n> > > don't report when standby enters archive recovery mode, say due to a\n> > > failure in the connection to primary or after the promote signal is\n> > > found. Isn't it useful to report in this case as well to know the\n> > > recovery progress?\n> >\n> > I think your patch disables progress too early, effectively turning\n> > off the standby progress feature. The purpose was to report on things\n> > that take long periods during recovery, not just prior to recovery.\n> >\n> > I would advocate that we disable progress only while waiting, as I've done here:\n> > https://www.postgresql.org/message-id/CANbhV-GcWjZ2cmj0uCbZDWQUHnneMi_4EfY3dVWq0-yD5o7Ccg%40mail.gmail.com\n>\n> Maybe I'm confused here, but I think that, on a standby, startup\n> progress messages are only printed until the main redo loop is\n> reached. Otherwise, we would print a message on a standby every 10s\n> forever, which seems like a thing that most users would not like. So I\n> think that Bharath has the right idea here.\n\nYes, the idea is to disable the timeout on standby completely since we\nactually don't report any recovery progress. Keeping it enabled,\nunnecessarily calls startup_progress_timeout_handler() every\nlog_startup_progress_interval seconds i.e. 
10 seconds. That's the\nintention of the patch.\n\n> I don't think that his patch is right in detail, though. I don't think\n> the call to disable_timeout() needs to be conditional,\n\nYes, disable_timeout() returns if the timeout was previously disabled\ni.e. all_timeouts[STARTUP_PROGRESS_TIMEOUT].active is false. I changed\nit in the v2 patch.\n\n> and I don't\n> think the Assert is correct.\n\nYou're right. My intention there was to check if the timeout is\nenabled while ereport_startup_progress() is called. In the v2 patch,\nwhen we actually disable the timeout startup_progress_timer_expired\ngets set to false and has_startup_progress_timeout_expired() just\nreturns in such a case.\n\n> Also, I think that your patch has the\n> right idea in encapsulating the disable_timeout() call inside a new\n> function disable_startup_progress_timeout(), rather than having the\n> details known directly by xlogrecovery.c.\n\nYes, I too like Simon's idea of {enable,\ndisable}_startup_progress_timeout functions, I utilized them in the v2\npatch here.\n\nI actually want to get rid of begin_startup_progress_phase() which now\nbecomes a thin wrapper calling disable and enable functions and ensure\nthe callers do follow enable()-report_progress()-disable() way to use\nthe feature, however I didn't code for that as it needs changes across\nmany files. If okay, I can code for that too. 
Thoughts?\n\nPlease review the v2 patch.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 15 Nov 2022 19:03:03 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Tue, 15 Nov 2022 at 13:33, Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Nov 14, 2022 at 9:31 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Mon, Nov 14, 2022 at 7:37 AM Simon Riggs\n> > <simon.riggs@enterprisedb.com> wrote:\n> > > > Whilte at it, I noticed that we report redo progress for PITR, but we\n> > > > don't report when standby enters archive recovery mode, say due to a\n> > > > failure in the connection to primary or after the promote signal is\n> > > > found. Isn't it useful to report in this case as well to know the\n> > > > recovery progress?\n> > >\n> > > I think your patch disables progress too early, effectively turning\n> > > off the standby progress feature. The purpose was to report on things\n> > > that take long periods during recovery, not just prior to recovery.\n> > >\n> > > I would advocate that we disable progress only while waiting, as I've done here:\n> > > https://www.postgresql.org/message-id/CANbhV-GcWjZ2cmj0uCbZDWQUHnneMi_4EfY3dVWq0-yD5o7Ccg%40mail.gmail.com\n> >\n> > Maybe I'm confused here, but I think that, on a standby, startup\n> > progress messages are only printed until the main redo loop is\n> > reached. Otherwise, we would print a message on a standby every 10s\n> > forever, which seems like a thing that most users would not like. So I\n> > think that Bharath has the right idea here.\n>\n> Yes, the idea is to disable the timeout on standby completely since we\n> actually don't report any recovery progress. 
Keeping it enabled,\n> unnecessarily calls startup_progress_timeout_handler() every\n> log_startup_progress_interval seconds i.e. 10 seconds. That's the\n> intention of the patch.\n\nAs long as we don't get the SIGALRMs that Thomas identified, then I'm happy.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Tue, 15 Nov 2022 14:27:55 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Tue, Nov 15, 2022 at 8:33 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> Please review the v2 patch.\n\nIt seems to me that this will call disable_startup_progress_timeout\nonce per WAL record, which seems like an unnecessary expense. How\nabout leaving the code inside the loop just as we have it, and putting\nif (StandbyMode) disable_startup_progress_timeout() before entering\nthe loop?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 15 Nov 2022 12:24:46 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Tue, Nov 15, 2022 at 10:55 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Nov 15, 2022 at 8:33 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > Please review the v2 patch.\n>\n> It seems to me that this will call disable_startup_progress_timeout\n> once per WAL record, which seems like an unnecessary expense. 
How\n> about leaving the code inside the loop just as we have it, and putting\n> if (StandbyMode) disable_startup_progress_timeout() before entering\n> the loop?\n\nThat can be done, only if we can disable the timeout in another place\nwhen the StandbyMode is set to true in ReadRecord(), that is, after\nthe standby server finishes crash recovery and enters standby mode.\n\nI'm attaching the v3 patch for further review. Please find the CF\nentry here - https://commitfest.postgresql.org/41/4012/.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 16 Nov 2022 12:17:13 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Wed, 16 Nov 2022 at 06:47, Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, Nov 15, 2022 at 10:55 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Tue, Nov 15, 2022 at 8:33 AM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > Please review the v2 patch.\n> >\n> > It seems to me that this will call disable_startup_progress_timeout\n> > once per WAL record, which seems like an unnecessary expense. How\n> > about leaving the code inside the loop just as we have it, and putting\n> > if (StandbyMode) disable_startup_progress_timeout() before entering\n> > the loop?\n>\n> That can be done, only if we can disable the timeout in another place\n> when the StandbyMode is set to true in ReadRecord(), that is, after\n> the standby server finishes crash recovery and enters standby mode.\n>\n> I'm attaching the v3 patch for further review. 
Please find the CF\n> entry here - https://commitfest.postgresql.org/41/4012/.\n\nbegin_startup_progress_phase() checks to see if feature is disabled\ntwice, so I think you can skip the check and just rely on the check in\nenable().\n\nOtherwise, all good.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 16 Nov 2022 08:58:01 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Wed, Nov 16, 2022 at 2:28 PM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n>\n> On Wed, 16 Nov 2022 at 06:47, Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Tue, Nov 15, 2022 at 10:55 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > >\n> > > On Tue, Nov 15, 2022 at 8:33 AM Bharath Rupireddy\n> > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > > Please review the v2 patch.\n> > >\n> > > It seems to me that this will call disable_startup_progress_timeout\n> > > once per WAL record, which seems like an unnecessary expense. How\n> > > about leaving the code inside the loop just as we have it, and putting\n> > > if (StandbyMode) disable_startup_progress_timeout() before entering\n> > > the loop?\n> >\n> > That can be done, only if we can disable the timeout in another place\n> > when the StandbyMode is set to true in ReadRecord(), that is, after\n> > the standby server finishes crash recovery and enters standby mode.\n> >\n> > I'm attaching the v3 patch for further review. 
Please find the CF\n> > entry here - https://commitfest.postgresql.org/41/4012/.\n>\n> begin_startup_progress_phase() checks to see if feature is disabled\n> twice, so I think you can skip the check and just rely on the check in\n> enable().\n\nYes, I did that intentionally to avoid begin_startup_progress_phase()\ncalling disable and enable functions when the feature is disabled.\nI'll leave it to the committer whether to retain it or delete it.\n\n> Otherwise, all good.\n\nThanks.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 16 Nov 2022 15:05:24 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Wed, Nov 16, 2022 at 1:47 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> That can be done, only if we can disable the timeout in another place\n> when the StandbyMode is set to true in ReadRecord(), that is, after\n> the standby server finishes crash recovery and enters standby mode.\n\nOh, interesting. I didn't realize that we would need to worry about that case.\n\n> I'm attaching the v3 patch for further review. Please find the CF\n> entry here - https://commitfest.postgresql.org/41/4012/.\n\nI kind of dislike having to have logic for this in two places. Seems\nlike it could create future bugs.\n\nHow about the attached approach, instead?
This way, the first time the\ntimer expires after we reach standby mode, we reactively disable it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 16 Nov 2022 13:51:06 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Thu, Nov 17, 2022 at 7:51 AM Robert Haas <robertmhaas@gmail.com> wrote:\n+ * up, since standby mode is a state that is intendeded to persist\n\ntypo\n\nOtherwise LGTM.\n\n\n", "msg_date": "Thu, 17 Nov 2022 13:53:17 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Thu, Nov 17, 2022 at 12:21 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Nov 16, 2022 at 1:47 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > That can be done, only if we can disable the timeout in another place\n> > when the StandbyMode is set to true in ReadRecord(), that is, after\n> > the standby server finishes crash recovery and enters standby mode.\n>\n> Oh, interesting. I didn't realize that we would need to worry about that case.\n>\n> > I'm attaching the v3 patch for further review. Please find the CF\n> > entry here - https://commitfest.postgresql.org/41/4012/.\n>\n> I kind of dislike having to have logic for this in two places. Seems\n> like it could create future bugs.\n\nDuplication is a problem that I agree with and I have an idea here -\nhow about introducing a new function, say EnableStandbyMode() that\nsets StandbyMode to true and disables the startup progress timeout,\nsomething like the attached?\n\n> How about the attached approach, instead? This way, the first time the\n> timer expires after we reach standby mode, we reactively disable it.\n\nHm. I'm not really sure if it's a good idea.
While it simplifies the\ncode, the has_startup_progress_timeout_expired() gets called for every\nWAL record in standby mode. Isn't this an unnecessary thing?\nCurrently, the if (!StandbyMode) condition blocks the function calls.\nAnd I'm also a little concerned that we move the StandbyMode variable\nto startup.c which so far tiled to xlogrecovery.c. Maybe these are not\nreally concerns at all. Maybe others are okay with this approach.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 17 Nov 2022 12:52:46 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Thu, Nov 17, 2022 at 2:22 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> Duplication is a problem that I agree with and I have an idea here -\n> how about introducing a new function, say EnableStandbyMode() that\n> sets StandbyMode to true and disables the startup progress timeout,\n> something like the attached?\n\nThat works for me, more or less.
But I think that\nenable_startup_progress_timeout should be amended to either say if\n(log_startup_progress_interval == 0 || StandbyMode) return; or else it\nshould at least Assert(!StandbyMode), so that we can't accidentally\nre-enable the timer after we shut it off.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Nov 2022 14:12:12 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Fri, Nov 18, 2022 at 12:42 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Nov 17, 2022 at 2:22 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > Duplication is a problem that I agree with and I have an idea here -\n> > how about introducing a new function, say EnableStandbyMode() that\n> > sets StandbyMode to true and disables the startup progress timeout,\n> > something like the attached?\n>\n> That works for me, more or less. But I think that\n> enable_startup_progress_timeout should be amended to either say if\n> (log_startup_progress_interval == 0 || StandbyMode) return; or else it\n> should at least Assert(!StandbyMode), so that we can't accidentally\n> re-enable the timer after we shut it off.\n\nHm, an assertion may not help in typical production servers running on\nnon-assert builds.
I've modified the if condition, please see the\nattached v5 patch.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 18 Nov 2022 15:55:00 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "At Fri, 18 Nov 2022 15:55:00 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Fri, Nov 18, 2022 at 12:42 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Thu, Nov 17, 2022 at 2:22 AM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > Duplication is a problem that I agree with and I have an idea here -\n> > > how about introducing a new function, say EnableStandbyMode() that\n> > > sets StandbyMode to true and disables the startup progress timeout,\n> > > something like the attached?\n> >\n> > That works for me, more or less. But I think that\n> > enable_startup_progress_timeout should be amended to either say if\n> > (log_startup_progress_interval == 0 || StandbyMode) return; or else it\n> > should at least Assert(!StandbyMode), so that we can't accidentally\n> > re-enable the timer after we shut it off.\n> \n> Hm, an assertion may not help in typical production servers running on\n> non-assert builds. I've modified the if condition, please see the\n> attached v5 patch.\n\nI prefer Robert's approach as it is more robust for future changes and\nsimple. I prefer to avoid this kind of piggy-backing and it doesn't\nseem to be needed in this case.
XLogShutdownWalRcv() looks like a\nsimilar case to me and honestly I don't like it in the sense of\nrobustness but it is simpler than checking walreceiver status at every\nsite that refers to the flag.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 21 Nov 2022 11:20:00 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Sun, Nov 20, 2022 at 9:20 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> I prefer Robert's approach as it is more robust for future changes and\n> simple. I prefer to avoid this kind of piggy-backing and it doesn't\n> seem to be needed in this case. XLogShutdownWalRcv() looks like a\n> similar case to me and honestly I don't like it in the sense of\n> robustness but it is simpler than checking walreceiver status at every\n> site that refers to the flag.\n\nI don't understand what you want changed. Can you be more specific\nabout what you mean by \"Robert's approach\"?\n\nI don't agree with Bharath's logic for preferring an if-test to an\nAssert. There are some cases where we think we've written the code\ncorrectly but also recognize that the logic is complicated enough that\nsomething might slip through the cracks. Then, a runtime check makes\nsense, because otherwise something real bad might happen on a\nproduction instance. But here, I don't think that's the main risk. I\nthink the main risk is that some future patch tries to add code that\nshould print startup log messages later on. That would probably be a\ncoding mistake, and Assert would alert the patch author about that,\nwhereas amending the if-test would just make the code do something\ndifferently then the author intended.\n\nBut I don't feel super-strongly about this, which is why I mentioned\nboth options in my previous email.
I'm not clear on whether you are\nexpressing an opinion on this point in particular or something more\ngeneral.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 21 Nov 2022 12:07:45 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Mon, Nov 21, 2022 at 10:37 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Sun, Nov 20, 2022 at 9:20 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > I prefer Robert's approach as it is more robust for future changes and\n> > simple. I prefer to avoid this kind of piggy-backing and it doesn't\n> > seem to be needed in this case. XLogShutdownWalRcv() looks like a\n> > similar case to me and honestly I don't like it in the sense of\n> > robustness but it is simpler than checking walreceiver status at every\n> > site that refers to the flag.\n>\n> I don't understand what you want changed. Can you be more specific\n> about what you mean by \"Robert's approach\"?\n>\n> I don't agree with Bharath's logic for preferring an if-test to an\n> Assert. There are some cases where we think we've written the code\n> correctly but also recognize that the logic is complicated enough that\n> something might slip through the cracks. Then, a runtime check makes\n> sense, because otherwise something real bad might happen on a\n> production instance. But here, I don't think that's the main risk. I\n> think the main risk is that some future patch tries to add code that\n> should print startup log messages later on. That would probably be a\n> coding mistake, and Assert would alert the patch author about that,\n> whereas amending the if-test would just make the code do something\n> differently then the author intended.\n>\n> But I don't feel super-strongly about this, which is why I mentioned\n> both options in my previous email.
I'm not clear on whether you are\n> expressing an opinion on this point in particular or something more\n> general.\n\nIf we place just the Assert(!StandbyMode); in\nenable_startup_progress_timeout(), it fails for\nbegin_startup_progress_phase() in ResetUnloggedRelations() because the\nInitWalRecovery(), that sets StandbyMode to true, is called before\nResetUnloggedRelations() . However, with the if (StandbyMode) {\nreturn; }, we fail to report progress of ResetUnloggedRelations() in a\nstandby, which isn't a good idea at all because we only want to\ndisable the timeout during the recovery's main loop.\n\nI introduced an assert-only function returning true when we're in\nstandby's main redo apply loop and modified the assertion to be\nAssert(!InStandbyMainRedoApplyLoop()); works better but it complicates\nthe code a bit. FWIW, I'm attaching the v6 patch with this change.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 22 Nov 2022 16:35:04 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Tue, Nov 22, 2022 at 6:05 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> If we place just the Assert(!StandbyMode); in\n> enable_startup_progress_timeout(), it fails for\n> begin_startup_progress_phase() in ResetUnloggedRelations() because the\n> InitWalRecovery(), that sets StandbyMode to true, is called before\n> ResetUnloggedRelations() . However, with the if (StandbyMode) {\n> return; }, we fail to report progress of ResetUnloggedRelations() in a\n> standby, which isn't a good idea at all because we only want to\n> disable the timeout during the recovery's main loop.\n\nUgh. Well, in that case, I guess my vote is to forget about this whole\nAssert business and just commit what you had in v4.
Does that work for\nyou?\n\nProtecting against specifically the situation where we're in the\nstandby's main redo apply loop is not really what I had in mind here,\nbut this is already sort of weirdly complicated-looking, and making it\nmore weirdly complicated-looking to achieve the kind of protection\nthat I had in mind doesn't really seem worth it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 2 Feb 2023 15:59:11 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Fri, Feb 3, 2023 at 2:29 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n\nThanks for looking at this.\n\n> On Tue, Nov 22, 2022 at 6:05 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > If we place just the Assert(!StandbyMode); in\n> > enable_startup_progress_timeout(), it fails for\n> > begin_startup_progress_phase() in ResetUnloggedRelations() because the\n> > InitWalRecovery(), that sets StandbyMode to true, is called before\n> > ResetUnloggedRelations() . However, with the if (StandbyMode) {\n> > return; }, we fail to report progress of ResetUnloggedRelations() in a\n> > standby, which isn't a good idea at all because we only want to\n> > disable the timeout during the recovery's main loop.\n>\n> Ugh. Well, in that case, I guess my vote is to forget about this whole\n> Assert business and just commit what you had in v4.
Does that work for\n> you?\n\nYes, it seems reasonable to me.\n\n> Protecting against specifically the situation where we're in the\n> standby's main redo apply loop is not really what I had in mind here,\n> but this is already sort of weirdly complicated-looking, and making it\n> more weirdly complicated-looking to achieve the kind of protection\n> that I had in mind doesn't really seem worth it.\n\nIMHO, the responsibility of whether or not to report progress of any\noperation can lie with the developers writing features using the\nprogress reporting mechanism. The progress reporting mechanism can\njust be independent of all that.\n\nI took the v4 patch, added some comments and attached it as the v7\npatch here. Please find it.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 3 Feb 2023 09:52:37 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Thu, Feb 2, 2023 at 11:22 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> I took the v4 patch, added some comments and attached it as the v7\n> patch here. Please find it.\n\nCommitted and back-patched to v15.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 6 Feb 2023 11:01:42 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Feb 2, 2023 at 11:22 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> I took the v4 patch, added some comments and attached it as the v7\n>> patch here. Please find it.\n\n> Committed and back-patched to v15.\n\nUmm ...
is this really the sort of patch to be committing on a\nrelease wrap day?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 06 Feb 2023 11:07:15 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Mon, Feb 6, 2023 at 11:07 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Umm ... is this really the sort of patch to be committing on a\n> release wrap day?\n\nOh, shoot, I wasn't thinking about that. Would you like me to revert\nit in v15 for now?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 6 Feb 2023 11:08:56 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Feb 6, 2023 at 11:07 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Umm ... is this really the sort of patch to be committing on a\n>> release wrap day?\n\n> Oh, shoot, I wasn't thinking about that. Would you like me to revert\n> it in v15 for now?\n\nYeah, seems like the safest course.
I wouldn't object to it going in\n> after the release is over, but right now there's zero margin for error.\n\nDone.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 6 Feb 2023 11:22:16 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Mon, Feb 6, 2023 at 9:39 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Feb 6, 2023 at 11:07 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Umm ... is this really the sort of patch to be committing on a\n> > release wrap day?\n>\n> Oh, shoot, I wasn't thinking about that. Would you like me to revert\n> it in v15 for now?\n\nThanks a lot Robert for taking care of this. The patch is committed on\nHEAD and reverted on v15. Now that the minor version branches are\nstamped, is it time for us to get this to v15? I can then close the CF\nentry - https://commitfest.postgresql.org/42/4012/.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 8 Feb 2023 22:30:00 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> Thanks a lot Robert for taking care of this. The patch is committed on\n> HEAD and reverted on v15. Now that the minor version branches are\n> stamped, is it time for us to get this to v15? 
I can then close the CF\n> entry - https://commitfest.postgresql.org/42/4012/.\n\nNo objection to un-reverting from here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 08 Feb 2023 12:43:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" }, { "msg_contents": "On Wed, Feb 8, 2023 at 11:13 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> > Thanks a lot Robert for taking care of this. The patch is committed on\n> > HEAD and reverted on v15. Now that the minor version branches are\n> > stamped, is it time for us to get this to v15? I can then close the CF\n> > entry - https://commitfest.postgresql.org/42/4012/.\n>\n> No objection to un-reverting from here.\n\nThanks Robert, Tom. It is now un-reverted on v15. I've closed the CF entry.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 11 Feb 2023 04:55:30 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: when the startup process doesn't (logging startup delays)" } ]
[ { "msg_contents": "Hello,\nI am new in PostgreSQL and I am trying to understand what the “test” word is representing in the archive_command configuration that the PostgreSQL documentation is showing as the format on how to set up this parameter\n\narchive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f' # Unix\n\nDoes anybody know what is “test” representing in this parameter configuration?\n\nThank in advance for your help on this.\n\nRegards,\nAllie", "msg_date": "Mon, 19 Apr 2021 21:09:13 +0000", "msg_from": "Allie Crawford <CrawfordMA@ChurchofJesusChrist.org>", "msg_from_op": true, "msg_subject": "archive_commnad parameter question" }, { "msg_contents": "On 4/19/21 2:09 PM, Allie Crawford wrote:\n> Hello,\n> \n> I am new in PostgreSQL and I am trying to understand what the “test” \n> word is representing in the archive_command configuration that the \n> PostgreSQL documentation is showing as the format on how to set up this \n> parameter\n> \n> archive_command = 'test !
-f /mnt/server/archivedir/%f && cp %p \n> /mnt/server/archivedir/%f' # Unix\n> \n> Does anybody know what is “test” representing in this parameter \n> configuration?\n\nPer the docs:\n\n\"This is an example, not a recommendation, and might not work on all \nplatforms.\"\n\ntest in this case refers to a shell command:\n\nhttps://www.computerhope.com/unix/bash/test.htm\n\nSo this only works in environments that have that command.\n\n> \n> Thank in advance for your help on this.\n> \n> Regards,\n> \n> Allie\n> \n\n\n-- \nAdrian Klaver\nadrian.klaver@aklaver.com\n\n\n", "msg_date": "Mon, 19 Apr 2021 14:16:41 -0700", "msg_from": "Adrian Klaver <adrian.klaver@aklaver.com>", "msg_from_op": false, "msg_subject": "Re: archive_commnad parameter question" }, { "msg_contents": "On 20 Apr 2021, at 7:09, Allie Crawford wrote:\n\n> archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p \n> /mnt/server/archivedir/%f' # Unix\n>\n> Does anybody know what is “test” representing in this parameter \n> configuration?\n>\nmy_unix_prompt> man test\n\ngives:\n\t Tests the expression given and sets the exit status to 0 if true, and 1 if false. An expression is made up\n\t of one or more operators and their arguments.\n\nIn other words “test” here is a unix command which evaluated the expression supplied via the arguments.\nMostly used in its alternate form of “[ … ]” in shell scripts\n\nGavan Schneider\n——\nGavan Schneider, Sodwalls, NSW, Australia\nExplanations exist; they have existed for all time; there is always a \nwell-known solution to every human problem — neat, plausible, and \nwrong.\n— H. L.
Mencken, 1920\n\n\n", "msg_date": "Tue, 20 Apr 2021 07:16:56 +1000", "msg_from": "Gavan Schneider <list.pg.gavan@pendari.org>", "msg_from_op": false, "msg_subject": "Re: archive_commnad parameter question" }, { "msg_contents": "On 2021-04-19 21:09:13 +0000, Allie Crawford wrote:\n> I am new in PostgreSQL and I am trying to understand what the “test” word is\n> representing in the archive_command configuration that the PostgreSQL\n> documentation is showing as the format on how to set up this parameter\n> \n> archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/\n> archivedir/%f' # Unix\n> \n> Does anybody know what is “test” representing in this parameter configuration?\n\n\"test\" is a unix command for testing stuff (as the name implies).\n\"test -f\" in particular tests whether the argument exists and is a\nregular file) and the \"!\" inverts the result.\n\nSo the whole line checks that the target *doesn't* already exist before\nattempting to copy over it.\n\n hp\n\n-- \n _ | Peter J. Holzer | Story must make more sense than reality.\n|_|_) | |\n| | | hjp@hjp.at | -- Charles Stross, \"Creative writing\n__/ | http://www.hjp.at/ | challenge!\"", "msg_date": "Mon, 19 Apr 2021 23:18:45 +0200", "msg_from": "\"Peter J. Holzer\" <hjp-pgsql@hjp.at>", "msg_from_op": false, "msg_subject": "Re: archive_commnad parameter question" }, { "msg_contents": "On Mon, 2021-04-19 at 21:09 +0000, Allie Crawford wrote:\n> Hello,\n> I am new in PostgreSQL and I am trying to understand what the “test” word is\n> representing in the archive_command configuration that the PostgreSQL\n> documentation is showing as the format on how to set up this parameter\n>  \n> archive_command = 'test !
-f /mnt/server/archivedir/%f && cp %p\n> /mnt/server/archivedir/%f'  # Unix\n>  \n> Does anybody know what is “test” representing in this parameter\n> configuration?\n\n'test' in this case is an actual executable present on many Unix and Unix-like\nsystems.\n\nIn this case it effectively gates the copy (cp) command so that it only runs\nif the target file does not already exist.", "msg_date": "Mon, 19 Apr 2021 14:19:38 -0700", "msg_from": "Alan Hodgson <ahodgson@lists.simkin.ca>", "msg_from_op": false, "msg_subject": "Re: archive_commnad parameter question" }, { "msg_contents": "Hi All,\nI have implemented Stream replication in one of my environments, and for some reason even though all the health checks are showing that the replication is working, when I run manual tests to see if changes are being replicated, the changes are not replicated to the standby postgresql environment. I have been researching for two day and I cannot find any documentation that talks about the case I am running into. I will appreciate if anybody could take a look at the details I have detailed below and give me some guidance on where the problem might be that is preventing my changes for being replicated.
Even though I was able to instantiate the standby while firewalld was enabled, I decided to disable it just in case that it was causing any issue to the manual changes, but disabling firewalld has not had any effect, I am still not able to get the manual changes test to be replicated to the standby site. As you will see in the details below, the streaming is working, both sites are in sync to the latest WAL but for some reasons the latest changes are not on the standby site. How is it possible that the standby site is completely in sync but yet does not contain the latest changes?\n\nThanks in advance for any help you can give me with this problem.\n\nRegards,\nAllie\n\nDetails:\n\nMaster postgresql Environment\n\npostgresql=# select * from pg_stat_replication;\n\n-[ RECORD 1 ]----+------------------------------\n\npid | 1979089\n\nusesysid | 16404\n\nusename | replacct\n\napplication_name | walreceiver\n\nclient_addr | <standby server IP>\n\nclient_hostname | <standby server name>\n\nclient_port | 55096\n\nbackend_start | 2022-01-06 17:29:51.542784-07\n\nbackend_xmin |\n\nstate | streaming\n\nsent_lsn | 0/35000788\n\nwrite_lsn | 0/35000788\n\nflush_lsn | 0/35000788\n\nreplay_lsn | 0/31000500\n\nwrite_lag | 00:00:00.001611\n\nflush_lag | 00:00:00.001693\n\nreplay_lag | 20:38:47.00904\n\nsync_priority | 1\n\nsync_state | sync\n\nreply_time | 2022-01-07 14:11:58.996277-07\n\n\n\npostgresql=#\n\n\npostgresql=# select * from pg_roles;\n\n rolname | rolsuper | rolinherit | rolcreaterole | rolcreatedb | rolcanlogin | rolreplication | rolconnlimit | rolpassword | rolvaliduntil | rolbypassrls | rolconfig | oid\n\n---------------------------+----------+------------+---------------+-------------+-------------+----------------+--------------+-------------+---------------+--------------+-----------+-------\n\n postgresql | t | t | t | t | t | t | -1 | ******** | | t | | 10\n\n pg_monitor | f | t | f | f | f | f | -1 | ******** | | f | | 3373\n\n pg_read_all_settings | f | t | 
f | f | f | f | -1 | ******** | | f | | 3374\n\n pg_read_all_stats | f | t | f | f | f | f | -1 | ******** | | f | | 3375\n\n pg_stat_scan_tables | f | t | f | f | f | f | -1 | ******** | | f | | 3377\n\n pg_read_server_files | f | t | f | f | f | f | -1 | ******** | | f | | 4569\n\n pg_write_server_files | f | t | f | f | f | f | -1 | ******** | | f | | 4570\n\n pg_execute_server_program | f | t | f | f | f | f | -1 | ******** | | f | | 4571\n\n pg_signal_backend | f | t | f | f | f | f | -1 | ******** | | f | | 4200\n\n replacct | t | t | t | t | t | t | -1 | ******** | | t | | 16404\n\n(10 rows)\n\n\n\npostgresql=#\n\n\npostgresql=# create database test_replication_3;\n\nCREATE DATABASE\n\npostgresql=#\n\n\n\npostgresql=# select datname from pg_database;\n\n datname\n\n--------------------\n\n postgres\n\n postgresql\n\n template1\n\n template0\n\n stream\n\n test_replication\n\n test_replication_2\n\n test_replication_3\n\n(8 rows)\n\n\n\npostgresql=#\n\n\n\npostgresql=# SELECT pg_current_wal_lsn();\n\n pg_current_wal_lsn\n\n--------------------\n\n 0/35000788\n\n(1 row)\n\n\n\npostgresql=#\n\n\nStandby postgresql Environment\npostgresql=# select * from pg_stat_wal_receiver;\n-[ RECORD 1 ]---------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\npid | 17340\nstatus | streaming\nreceive_start_lsn | 0/30000000\nreceive_start_tli | 1\nwritten_lsn | 0/35000788\nflushed_lsn | 0/35000788\nreceived_tli | 1\nlast_msg_send_time | 2022-01-07 14:09:48.766823-07\nlast_msg_receipt_time | 2022-01-07 14:09:48.767581-07\nlatest_end_lsn | 0/35000788\nlatest_end_time | 2022-01-07 14:08:48.663693-07\nslot_name | wal_req_x_replica\nsender_host | <Master Server IP>\nsender_port | <Master server postgresql port#>\nconninfo | user=replacct password=******** 
channel_binding=prefer dbname=replication host=<Master server IP> port=<postgresql port#> fallback_application_name=walreceiver sslmode=prefer sslcompression=0 ssl_min_protocol_version=TLSv1.2 gssencmode=prefer krbsrvname=postgres target_session_attrs=any\n\npostgresql=#\n\npostgresql=# select datname from pg_database;\n datname\n------------\n postgres\n postgresql\n template1\n template0\n stream\n(5 rows)\n\npostgresql=# select pg_last_wal_receive_lsn();\n pg_last_wal_receive_lsn\n-------------------------\n 0/35000788\n(1 row)\n\npostgresql=#", "msg_date": "Fri, 7 Jan 2022 21:57:28 +0000", "msg_from": "Allie Crawford <CrawfordMA@ChurchofJesusChrist.org>", "msg_from_op": true, "msg_subject": "Stream Replication not working" }, { "msg_contents": "Hi All,\nI have implemented Stream replication in one of my environments, and for some reason even though all the health checks are showing that 
the replication is working, when I run manual tests to see if changes are being replicated, the changes are not replicated to the standby postgresql environment. I have been researching for two days and I cannot find any documentation that talks about the case I am running into. I would appreciate it if anybody could take a look at the details below and give me some guidance on where the problem might be that is preventing my changes from being replicated. Even though I was able to instantiate the standby while firewalld was enabled, I decided to disable it just in case it was causing any issue to the manual changes, but disabling firewalld has not had any effect, I am still not able to get the manual changes test to be replicated to the standby site. As you will see in the details below, the streaming is working, both sites are in sync to the latest WAL but for some reason the latest changes are not on the standby site. How is it possible that the standby site is completely in sync yet does not contain the latest changes?\n\nThanks in advance for any help you can give me with this problem.\n\nRegards,\nAllie\n\nDetails:\n\nMaster postgresql Environment\n\npostgresql=# select * from pg_stat_replication;\n\n-[ RECORD 1 ]----+------------------------------\n\npid | 1979089\n\nusesysid | 16404\n\nusename | replacct\n\napplication_name | walreceiver\n\nclient_addr | <standby server IP>\n\nclient_hostname | <standby server name>\n\nclient_port | 55096\n\nbackend_start | 2022-01-06 17:29:51.542784-07\n\nbackend_xmin |\n\nstate | streaming\n\nsent_lsn | 0/35000788\n\nwrite_lsn | 0/35000788\n\nflush_lsn | 0/35000788\n\nreplay_lsn | 0/31000500\n\nwrite_lag | 00:00:00.001611\n\nflush_lag | 00:00:00.001693\n\nreplay_lag | 20:38:47.00904\n\nsync_priority | 1\n\nsync_state | sync\n\nreply_time | 2022-01-07 14:11:58.996277-07\n\n\n\npostgresql=#\n\n\npostgresql=# select * from pg_roles;\n\n rolname | rolsuper | rolinherit | rolcreaterole | rolcreatedb | 
rolcanlogin | rolreplication | rolconnlimit | rolpassword | rolvaliduntil | rolbypassrls | rolconfig | oid\n\n---------------------------+----------+------------+---------------+-------------+-------------+----------------+--------------+-------------+---------------+--------------+-----------+-------\n\n postgresql | t | t | t | t | t | t | -1 | ******** | | t | | 10\n\n pg_monitor | f | t | f | f | f | f | -1 | ******** | | f | | 3373\n\n pg_read_all_settings | f | t | f | f | f | f | -1 | ******** | | f | | 3374\n\n pg_read_all_stats | f | t | f | f | f | f | -1 | ******** | | f | | 3375\n\n pg_stat_scan_tables | f | t | f | f | f | f | -1 | ******** | | f | | 3377\n\n pg_read_server_files | f | t | f | f | f | f | -1 | ******** | | f | | 4569\n\n pg_write_server_files | f | t | f | f | f | f | -1 | ******** | | f | | 4570\n\n pg_execute_server_program | f | t | f | f | f | f | -1 | ******** | | f | | 4571\n\n pg_signal_backend | f | t | f | f | f | f | -1 | ******** | | f | | 4200\n\n replacct | t | t | t | t | t | t | -1 | ******** | | t | | 16404\n\n(10 rows)\n\n\n\npostgresql=#\n\n\npostgresql=# create database test_replication_3;\n\nCREATE DATABASE\n\npostgresql=#\n\n\n\npostgresql=# select datname from pg_database;\n\n datname\n\n--------------------\n\n postgres\n\n postgresql\n\n template1\n\n template0\n\n stream\n\n test_replication\n\n test_replication_2\n\n test_replication_3\n\n(8 rows)\n\n\n\npostgresql=#\n\n\n\npostgresql=# SELECT pg_current_wal_lsn();\n\n pg_current_wal_lsn\n\n--------------------\n\n 0/35000788\n\n(1 row)\n\n\n\npostgresql=#\n\n\nStandby postgresql Environment\npostgresql=# select * from pg_stat_wal_receiver;\n-[ RECORD 1 ]---------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\npid | 17340\nstatus | 
streaming\nreceive_start_lsn | 0/30000000\nreceive_start_tli | 1\nwritten_lsn | 0/35000788\nflushed_lsn | 0/35000788\nreceived_tli | 1\nlast_msg_send_time | 2022-01-07 14:09:48.766823-07\nlast_msg_receipt_time | 2022-01-07 14:09:48.767581-07\nlatest_end_lsn | 0/35000788\nlatest_end_time | 2022-01-07 14:08:48.663693-07\nslot_name | wal_req_x_replica\nsender_host | <Master Server IP>\nsender_port | <Master server postgresql port#>\nconninfo | user=replacct password=******** channel_binding=prefer dbname=replication host=<Master server IP> port=<postgresql port#> fallback_application_name=walreceiver sslmode=prefer sslcompression=0 ssl_min_protocol_version=TLSv1.2 gssencmode=prefer krbsrvname=postgres target_session_attrs=any\n\npostgresql=#\n\npostgresql=# select datname from pg_database;\n datname\n------------\n postgres\n postgresql\n template1\n template0\n stream\n(5 rows)\n\npostgresql=# select pg_last_wal_receive_lsn();\n pg_last_wal_receive_lsn\n-------------------------\n 0/35000788\n(1 row)\n\npostgresql=#", "msg_date": "Mon, 10 Jan 2022 19:53:42 +0000", "msg_from": "Allie Crawford <CrawfordMA@ChurchofJesusChrist.org>", "msg_from_op": true, "msg_subject": "Stream Replication not working" }, { 
"msg_contents": "Seems there is a problem with the replay on your standby. Either it is too\nslow or stuck behind some locks ( replay_lag of 20:38:47.00904 indicates\nthis and the flush_lsn is the same as lsn on primary ) . Run pg_locks to\nsee if the replay is stuck behind a lock.\n\n\n\nOn Mon, Jan 10, 2022 at 11:53 AM Allie Crawford <\nCrawfordMA@churchofjesuschrist.org> wrote:\n\n> Hi All,\n>\n> I have implemented Stream replication in one of my environments, and for\n> some reason even though all the health checks are showing that the\n> replication is working, when I run manual tests to see if changes are being\n> replicated, the changes are not replicated to the standby postgresql\n> environment. I have been researching for two day and I cannot find any\n> documentation that talks about the case I am running into. I will\n> appreciate if anybody could take a look at the details I have detailed\n> below and give me some guidance on where the problem might be that is\n> preventing my changes for being replicated. Even though I was able to\n> instantiate the standby while firewalld was enabled, I decided to disable\n> it just in case that it was causing any issue to the manual changes, but\n> disabling firewalld has not had any effect, I am still not able to get the\n> manual changes test to be replicated to the standby site. As you will see\n> in the details below, the streaming is working, both sites are in sync to\n> the latest WAL but for some reasons the latest changes are not on the\n> standby site. 
How is it possible that the standby site is completely in\n> sync but yet does not contain the latest changes?\n>\n>\n>\n> Thanks in advance for any help you can give me with this problem.\n>\n>\n>\n> Regards,\n>\n> Allie\n>\n>\n>\n> *Details:*\n>\n>\n>\n> *Master **postgresql Environment*\n>\n> postgresql=# select * from pg_stat_replication;\n>\n> -[ RECORD 1 ]----+------------------------------\n>\n> pid | 1979089\n>\n> usesysid | 16404\n>\n> usename | replacct\n>\n> application_name | walreceiver\n>\n> client_addr | <standby server IP>\n>\n> client_hostname | <standby server name>\n>\n> client_port | 55096\n>\n> backend_start | 2022-01-06 17:29:51.542784-07\n>\n> backend_xmin |\n>\n> state | streaming\n>\n> sent_lsn | 0/35000788\n>\n> write_lsn | 0/35000788\n>\n> flush_lsn | 0/35000788\n>\n> replay_lsn | 0/31000500\n>\n> write_lag | 00:00:00.001611\n>\n> flush_lag | 00:00:00.001693\n>\n> replay_lag | 20:38:47.00904\n>\n> sync_priority | 1\n>\n> sync_state | sync\n>\n> reply_time | 2022-01-07 14:11:58.996277-07\n>\n>\n>\n> postgresql=#\n>\n>\n>\n> postgresql=# select * from pg_roles;\n>\n> rolname | rolsuper | rolinherit | rolcreaterole |\n> rolcreatedb | rolcanlogin | rolreplication | rolconnlimit | rolpassword |\n> rolvaliduntil | rolbypassrls | rolconfig | oid\n>\n>\n> ---------------------------+----------+------------+---------------+-------------+-------------+----------------+--------------+-------------+---------------+--------------+-----------+-------\n>\n> postgresql | t | t | t | t\n> | t | t | -1 | ******** |\n> | t | | 10\n>\n> pg_monitor | f | t | f | f\n> | f | f | -1 | ******** |\n> | f | | 3373\n>\n> pg_read_all_settings | f | t | f | f\n> | f | f | -1 | ******** |\n> | f | | 3374\n>\n> pg_read_all_stats | f | t | f | f\n> | f | f | -1 | ******** |\n> | f | | 3375\n>\n> pg_stat_scan_tables | f | t | f | f\n> | f | f | -1 | ******** |\n> | f | | 3377\n>\n> pg_read_server_files | f | t | f | f\n> | f | f | -1 | ******** |\n> | f | | 4569\n>\n> 
pg_write_server_files | f | t | f | f\n> | f | f | -1 | ******** |\n> | f | | 4570\n>\n> pg_execute_server_program | f | t | f | f\n> | f | f | -1 | ******** |\n> | f | | 4571\n>\n> pg_signal_backend | f | t | f | f\n> | f | f | -1 | ******** |\n> | f | | 4200\n>\n> replacct | t | t | t | t\n> | t | t | -1 | ******** |\n> | t | | 16404\n>\n> (10 rows)\n>\n>\n>\n> postgresql=#\n>\n>\n>\n> postgresql=# create database test_replication_3;\n>\n> CREATE DATABASE\n>\n> postgresql=#\n>\n>\n>\n> postgresql=# select datname from pg_database;\n>\n> datname\n>\n> --------------------\n>\n> postgres\n>\n> postgresql\n>\n> template1\n>\n> template0\n>\n> stream\n>\n> test_replication\n>\n> test_replication_2\n>\n> test_replication_3\n>\n> (8 rows)\n>\n>\n>\n> postgresql=#\n>\n>\n>\n> postgresql=# SELECT pg_current_wal_lsn();\n>\n> pg_current_wal_lsn\n>\n> --------------------\n>\n> 0/35000788\n>\n> (1 row)\n>\n>\n>\n> postgresql=#\n>\n>\n>\n>\n>\n> *Standby **postgresql Environment*\n>\n> postgresql=# select * from pg_stat_wal_receiver;\n>\n> -[ RECORD 1\n> ]---------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n> pid | 17340\n>\n> status | streaming\n>\n> receive_start_lsn | 0/30000000\n>\n> receive_start_tli | 1\n>\n> written_lsn | 0/35000788\n>\n> flushed_lsn | 0/35000788\n>\n> received_tli | 1\n>\n> last_msg_send_time | 2022-01-07 14:09:48.766823-07\n>\n> last_msg_receipt_time | 2022-01-07 14:09:48.767581-07\n>\n> latest_end_lsn | 0/35000788\n>\n> latest_end_time | 2022-01-07 14:08:48.663693-07\n>\n> slot_name | wal_req_x_replica\n>\n> sender_host | <Master Server IP>\n>\n> sender_port | <Master server postgresql port#>\n>\n> conninfo | user=replacct password=********\n> channel_binding=prefer dbname=replication host=<Master server IP>\n> 
port=<postgresql port#> fallback_application_name=walreceiver\n> sslmode=prefer sslcompression=0 ssl_min_protocol_version=TLSv1.2\n> gssencmode=prefer krbsrvname=postgres target_session_attrs=any\n>\n>\n>\n> postgresql=#\n>\n>\n>\n> postgresql=# select datname from pg_database;\n>\n> datname\n>\n> ------------\n>\n> postgres\n>\n> postgresql\n>\n> template1\n>\n> template0\n>\n> stream\n>\n> (5 rows)\n>\n>\n>\n> postgresql=# select pg_last_wal_receive_lsn();\n>\n> pg_last_wal_receive_lsn\n>\n> -------------------------\n>\n> 0/35000788\n>\n> (1 row)\n>\n>\n>\n> postgresql=#\n>\n>\n>\n>\n>", "msg_date": "Mon, 10 Jan 2022 12:06:12 -0800", "msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Stream Replication not 
working" }, { "msg_contents": "Thank you so much for your help on this Satya. I have detailed right below the output of the query you asked me to run.\n\nMaster\n\npostgresql@<master> ~>psql\n\npsql (13.5)\n\nType \"help\" for help.\n\n\n\npostgresql=# select * from pg_locks;\n\n locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction | pid | mode | granted | fastpath\n\n------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+---------+-----------------+---------+----------\n\n relation | 16384 | 12141 | | | | | | | | 3/6715 | 2669949 | AccessShareLock | t | t\n\n virtualxid | | | | | 3/6715 | | | | | 3/6715 | 2669949 | ExclusiveLock | t | t\n\n(2 rows)\n\n\n\npostgresql=#\n\n\nStandby\npostgresql@<standby> ~>psql\npsql (13.5)\nType \"help\" for help.\n\npostgresql=# select * from pg_locks;\n locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction | pid | mode | granted | fastpath\n------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+--------+-----------------+---------+----------\n relation | 16384 | 12141 | | | | | | | | 2/50 | 642064 | AccessShareLock | t | t\n virtualxid | | | | | 2/50 | | | | | 2/50 | 642064 | ExclusiveLock | t | t\n virtualxid | | | | | 1/1 | | | | | 1/0 | 17333 | ExclusiveLock | t | t\n(3 rows)\n\npostgresql=#\n\n\n\n\nFrom: SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>\nDate: Monday, January 10, 2022 at 1:06 PM\nTo: Allie Crawford <CrawfordMA@ChurchofJesusChrist.org>\nCc: pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\nSubject: [Ext:] Re: Stream Replication not working\n[External Email]\nSeems there is a problem with the replay on your standby. 
Either it is too slow or stuck behind some locks (replay_lag of 20:38:47.00904 indicates this, and the flush_lsn is the same as the lsn on primary). Run pg_locks to see if the replay is stuck behind a lock.\n\n\n\nOn Mon, Jan 10, 2022 at 11:53 AM Allie Crawford <CrawfordMA@churchofjesuschrist.org> wrote:\nHi All,\nI have implemented Stream replication in one of my environments, and for some reason even though all the health checks are showing that the replication is working, when I run manual tests to see if changes are being replicated, the changes are not replicated to the standby postgresql environment. I have been researching for two days and I cannot find any documentation that talks about the case I am running into. I will appreciate if anybody could take a look at the details I have detailed below and give me some guidance on where the problem might be that is preventing my changes for being replicated. Even though I was able to instantiate the standby while firewalld was enabled, I decided to disable it just in case that it was causing any issue to the manual changes, but disabling firewalld has not had any effect, I am still not able to get the manual changes test to be replicated to the standby site. As you will see in the details below, the streaming is working, both sites are in sync to the latest WAL but for some reason the latest changes are not on the standby site. 
How is it possible that the standby site is completely in sync but yet does not contain the latest changes?\n\nThanks in advance for any help you can give me with this problem.\n\nRegards,\nAllie\n\nDetails:\n\nMaster postgresql Environment\n\npostgresql=# select * from pg_stat_replication;\n\n-[ RECORD 1 ]----+------------------------------\n\npid | 1979089\n\nusesysid | 16404\n\nusename | replacct\n\napplication_name | walreceiver\n\nclient_addr | <standby server IP>\n\nclient_hostname | <standby server name>\n\nclient_port | 55096\n\nbackend_start | 2022-01-06 17:29:51.542784-07\n\nbackend_xmin |\n\nstate | streaming\n\nsent_lsn | 0/35000788\n\nwrite_lsn | 0/35000788\n\nflush_lsn | 0/35000788\n\nreplay_lsn | 0/31000500\n\nwrite_lag | 00:00:00.001611\n\nflush_lag | 00:00:00.001693\n\nreplay_lag | 20:38:47.00904\n\nsync_priority | 1\n\nsync_state | sync\n\nreply_time | 2022-01-07 14:11:58.996277-07\n\n\n\npostgresql=#\n\n\npostgresql=# select * from pg_roles;\n\n rolname | rolsuper | rolinherit | rolcreaterole | rolcreatedb | rolcanlogin | rolreplication | rolconnlimit | rolpassword | rolvaliduntil | rolbypassrls | rolconfig | oid\n\n---------------------------+----------+------------+---------------+-------------+-------------+----------------+--------------+-------------+---------------+--------------+-----------+-------\n\n postgresql | t | t | t | t | t | t | -1 | ******** | | t | | 10\n\n pg_monitor | f | t | f | f | f | f | -1 | ******** | | f | | 3373\n\n pg_read_all_settings | f | t | f | f | f | f | -1 | ******** | | f | | 3374\n\n pg_read_all_stats | f | t | f | f | f | f | -1 | ******** | | f | | 3375\n\n pg_stat_scan_tables | f | t | f | f | f | f | -1 | ******** | | f | | 3377\n\n pg_read_server_files | f | t | f | f | f | f | -1 | ******** | | f | | 4569\n\n pg_write_server_files | f | t | f | f | f | f | -1 | ******** | | f | | 4570\n\n pg_execute_server_program | f | t | f | f | f | f | -1 | ******** | | f | | 4571\n\n pg_signal_backend | f | t | f 
| f | f | f | -1 | ******** | | f | | 4200\n\n replacct | t | t | t | t | t | t | -1 | ******** | | t | | 16404\n\n(10 rows)\n\n\n\npostgresql=#\n\n\npostgresql=# create database test_replication_3;\n\nCREATE DATABASE\n\npostgresql=#\n\n\n\npostgresql=# select datname from pg_database;\n\n datname\n\n--------------------\n\n postgres\n\n postgresql\n\n template1\n\n template0\n\n stream\n\n test_replication\n\n test_replication_2\n\n test_replication_3\n\n(8 rows)\n\n\n\npostgresql=#\n\n\n\npostgresql=# SELECT pg_current_wal_lsn();\n\n pg_current_wal_lsn\n\n--------------------\n\n 0/35000788\n\n(1 row)\n\n\n\npostgresql=#\n\n\nStandby postgresql Environment\npostgresql=# select * from pg_stat_wal_receiver;\n-[ RECORD 1 ]---------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\npid | 17340\nstatus | streaming\nreceive_start_lsn | 0/30000000\nreceive_start_tli | 1\nwritten_lsn | 0/35000788\nflushed_lsn | 0/35000788\nreceived_tli | 1\nlast_msg_send_time | 2022-01-07 14:09:48.766823-07\nlast_msg_receipt_time | 2022-01-07 14:09:48.767581-07\nlatest_end_lsn | 0/35000788\nlatest_end_time | 2022-01-07 14:08:48.663693-07\nslot_name | wal_req_x_replica\nsender_host | <Master Server IP>\nsender_port | <Master server postgresql port#>\nconninfo | user=replacct password=******** channel_binding=prefer dbname=replication host=<Master server IP> port=<postgresql port#> fallback_application_name=walreceiver sslmode=prefer sslcompression=0 ssl_min_protocol_version=TLSv1.2 gssencmode=prefer krbsrvname=postgres target_session_attrs=any\n\npostgresql=#\n\npostgresql=# select datname from pg_database;\n datname\n------------\n postgres\n postgresql\n template1\n template0\n stream\n(5 rows)\n\npostgresql=# select pg_last_wal_receive_lsn();\n 
pg_last_wal_receive_lsn\n-------------------------\n 0/35000788\n(1 row)\n\npostgresql=#", "msg_date": "Mon, 10 Jan 2022 20:42:14 +0000", "msg_from": "Allie Crawford <CrawfordMA@ChurchofJesusChrist.org>", "msg_from_op": true, "msg_subject": "Re: [Ext:] Re: Stream Replication not working" }, { "msg_contents": "Hi All,\n\nfor us also, logs are applying at slave server but very very slow. 
While\nchecking we also have seen same set of locks to Master and Slave servers.\nPlease suggest the solution for that.\nMany Thanks in Advance !!\nThanks\n\nOn Tue, Jan 11, 2022 at 2:12 AM Allie Crawford <\nCrawfordMA@churchofjesuschrist.org> wrote:\n\n> Thank you so much for your help on this Satya. I have detailed right below\n> the output of the query you asked me to run.\n>\n>\n>\n> *Master *\n>\n> postgresql@<master> ~>psql\n>\n> psql (13.5)\n>\n> Type \"help\" for help.\n>\n>\n>\n> postgresql=# select * from pg_locks;\n>\n> locktype | database | relation | page | tuple | virtualxid |\n> transactionid | classid | objid | objsubid | virtualtransaction | pid\n> | mode | granted | fastpath\n>\n>\n> ------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+---------+-----------------+---------+----------\n>\n> relation | 16384 | 12141 | | | |\n> | | | | 3/6715 | 2669949 |\n> AccessShareLock | t | t\n>\n> virtualxid | | | | | 3/6715 |\n> | | | | 3/6715 | 2669949 |\n> ExclusiveLock | t | t\n>\n> (2 rows)\n>\n>\n>\n> postgresql=#\n>\n>\n>\n>\n>\n> *Standby*\n>\n> postgresql@<standby> ~>psql\n>\n> psql (13.5)\n>\n> Type \"help\" for help.\n>\n>\n>\n> postgresql=# select * from pg_locks;\n>\n> locktype | database | relation | page | tuple | virtualxid |\n> transactionid | classid | objid | objsubid | virtualtransaction | pid |\n> mode | granted | fastpath\n>\n>\n> ------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+--------+-----------------+---------+----------\n>\n> relation | 16384 | 12141 | | | |\n> | | | | 2/50 | 642064 |\n> AccessShareLock | t | t\n>\n> virtualxid | | | | | 2/50 |\n> | | | | 2/50 | 642064 |\n> ExclusiveLock | t | t\n>\n> virtualxid | | | | | 1/1 |\n> | | | | 1/0 | 17333 |\n> ExclusiveLock | t | t\n>\n> (3 rows)\n>\n>\n>\n> postgresql=#\n>\n>\n>\n>\n>\n>\n>\n>\n>\n> *From: *SATYANARAYANA NARLAPURAM 
<satyanarlapuram@gmail.com>\n> *Date: *Monday, January 10, 2022 at 1:06 PM\n> *To: *Allie Crawford <CrawfordMA@ChurchofJesusChrist.org>\n> *Cc: *pgsql-hackers@lists.postgresql.org <\n> pgsql-hackers@lists.postgresql.org>\n> *Subject: *[Ext:] Re: Stream Replication not working\n> [External Email]\n>\n> Seems there is a problem with the replay on your standby. Either it is too\n> slow or stuck behind some locks ( replay_lag of 20:38:47.00904 indicates\n> this and the flush_lsn is the same as lsn on primary ) . Run pg_locks to\n> see if the replay is stuck behind a lock.\n>\n>\n>\n>\n>\n>\n>\n> On Mon, Jan 10, 2022 at 11:53 AM Allie Crawford <\n> CrawfordMA@churchofjesuschrist.org> wrote:\n>\n> Hi All,\n>\n> I have implemented Stream replication in one of my environments, and for\n> some reason even though all the health checks are showing that the\n> replication is working, when I run manual tests to see if changes are being\n> replicated, the changes are not replicated to the standby postgresql\n> environment. I have been researching for two day and I cannot find any\n> documentation that talks about the case I am running into. I will\n> appreciate if anybody could take a look at the details I have detailed\n> below and give me some guidance on where the problem might be that is\n> preventing my changes for being replicated. Even though I was able to\n> instantiate the standby while firewalld was enabled, I decided to disable\n> it just in case that it was causing any issue to the manual changes, but\n> disabling firewalld has not had any effect, I am still not able to get the\n> manual changes test to be replicated to the standby site. As you will see\n> in the details below, the streaming is working, both sites are in sync to\n> the latest WAL but for some reasons the latest changes are not on the\n> standby site. 
How is it possible that the standby site is completely in\n> sync but yet does not contain the latest changes?\n>\n>\n>\n> Thanks in advance for any help you can give me with this problem.\n>\n>\n>\n> Regards,\n>\n> Allie\n>\n>\n>\n> *Details:*\n>\n>\n>\n> *Master **postgresql Environment*\n>\n> postgresql=# select * from pg_stat_replication;\n>\n> -[ RECORD 1 ]----+------------------------------\n>\n> pid | 1979089\n>\n> usesysid | 16404\n>\n> usename | replacct\n>\n> application_name | walreceiver\n>\n> client_addr | <standby server IP>\n>\n> client_hostname | <standby server name>\n>\n> client_port | 55096\n>\n> backend_start | 2022-01-06 17:29:51.542784-07\n>\n> backend_xmin |\n>\n> state | streaming\n>\n> sent_lsn | 0/35000788\n>\n> write_lsn | 0/35000788\n>\n> flush_lsn | 0/35000788\n>\n> replay_lsn | 0/31000500\n>\n> write_lag | 00:00:00.001611\n>\n> flush_lag | 00:00:00.001693\n>\n> replay_lag | 20:38:47.00904\n>\n> sync_priority | 1\n>\n> sync_state | sync\n>\n> reply_time | 2022-01-07 14:11:58.996277-07\n>\n>\n>\n> postgresql=#\n>\n>\n>\n> postgresql=# select * from pg_roles;\n>\n> rolname | rolsuper | rolinherit | rolcreaterole |\n> rolcreatedb | rolcanlogin | rolreplication | rolconnlimit | rolpassword |\n> rolvaliduntil | rolbypassrls | rolconfig | oid\n>\n>\n> ---------------------------+----------+------------+---------------+-------------+-------------+----------------+--------------+-------------+---------------+--------------+-----------+-------\n>\n> postgresql | t | t | t | t\n> | t | t | -1 | ******** |\n> | t | | 10\n>\n> pg_monitor | f | t | f | f\n> | f | f | -1 | ******** |\n> | f | | 3373\n>\n> pg_read_all_settings | f | t | f | f\n> | f | f | -1 | ******** |\n> | f | | 3374\n>\n> pg_read_all_stats | f | t | f | f\n> | f | f | -1 | ******** |\n> | f | | 3375\n>\n> pg_stat_scan_tables | f | t | f | f\n> | f | f | -1 | ******** |\n> | f | | 3377\n>\n> pg_read_server_files | f | t | f | f\n> | f | f | -1 | ******** |\n> | f | | 4569\n>\n> 
pg_write_server_files | f | t | f | f\n> | f | f | -1 | ******** |\n> | f | | 4570\n>\n> pg_execute_server_program | f | t | f | f\n> | f | f | -1 | ******** |\n> | f | | 4571\n>\n> pg_signal_backend | f | t | f | f\n> | f | f | -1 | ******** |\n> | f | | 4200\n>\n> replacct | t | t | t | t\n> | t | t | -1 | ******** |\n> | t | | 16404\n>\n> (10 rows)\n>\n>\n>\n> postgresql=#\n>\n>\n>\n> postgresql=# create database test_replication_3;\n>\n> CREATE DATABASE\n>\n> postgresql=#\n>\n>\n>\n> postgresql=# select datname from pg_database;\n>\n> datname\n>\n> --------------------\n>\n> postgres\n>\n> postgresql\n>\n> template1\n>\n> template0\n>\n> stream\n>\n> test_replication\n>\n> test_replication_2\n>\n> test_replication_3\n>\n> (8 rows)\n>\n>\n>\n> postgresql=#\n>\n>\n>\n> postgresql=# SELECT pg_current_wal_lsn();\n>\n> pg_current_wal_lsn\n>\n> --------------------\n>\n> 0/35000788\n>\n> (1 row)\n>\n>\n>\n> postgresql=#\n>\n>\n>\n>\n>\n> *Standby **postgresql Environment*\n>\n> postgresql=# select * from pg_stat_wal_receiver;\n>\n> -[ RECORD 1\n> ]---------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n> pid | 17340\n>\n> status | streaming\n>\n> receive_start_lsn | 0/30000000\n>\n> receive_start_tli | 1\n>\n> written_lsn | 0/35000788\n>\n> flushed_lsn | 0/35000788\n>\n> received_tli | 1\n>\n> last_msg_send_time | 2022-01-07 14:09:48.766823-07\n>\n> last_msg_receipt_time | 2022-01-07 14:09:48.767581-07\n>\n> latest_end_lsn | 0/35000788\n>\n> latest_end_time | 2022-01-07 14:08:48.663693-07\n>\n> slot_name | wal_req_x_replica\n>\n> sender_host | <Master Server IP>\n>\n> sender_port | <Master server postgresql port#>\n>\n> conninfo | user=replacct password=********\n> channel_binding=prefer dbname=replication host=<Master server IP>\n> 
port=<postgresql port#> fallback_application_name=walreceiver\n> sslmode=prefer sslcompression=0 ssl_min_protocol_version=TLSv1.2\n> gssencmode=prefer krbsrvname=postgres target_session_attrs=any\n>\n>\n>\n> postgresql=#\n>\n>\n>\n> postgresql=# select datname from pg_database;\n>\n> datname\n>\n> ------------\n>\n> postgres\n>\n> postgresql\n>\n> template1\n>\n> template0\n>\n> stream\n>\n> (5 rows)\n>\n>\n>\n> postgresql=# select pg_last_wal_receive_lsn();\n>\n> pg_last_wal_receive_lsn\n>\n> -------------------------\n>\n> 0/35000788\n>\n> (1 row)\n>\n>\n>\n> postgresql=#", "msg_date": "Tue, 11 Jan 2022 13:19:10 +0530", "msg_from": "Sushant Postgres <sushant.postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Ext:] Re: Stream Replication not working" }, { "msg_contents": "On Tue, Jan 11, 2022 at 2:12 AM Allie Crawford\n<CrawfordMA@churchofjesuschrist.org> wrote:\n>\n> Thank you so much for your help on this Satya. I have detailed right below the output of the query you asked me to run.\n>\n>\n>\n> Master\n>\n> postgresql@<master> ~>psql\n>\n> psql (13.5)\n>\n> Type \"help\" for help.\n>\n>\n>\n> postgresql=# select * from pg_locks;\n>\n> locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction | pid | mode | granted | fastpath\n>\n> 
------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+--------+-----------------+---------+----------\n>\n> relation | 16384 | 12141 | | | | | | | | 2/50 | 642064 | AccessShareLock | t | t\n>\n> virtualxid | | | | | 2/50 | | | | | 2/50 | 642064 | ExclusiveLock | t | t\n>\n> virtualxid | | | | | 1/1 | | | | | 1/0 | 17333 | ExclusiveLock | t | t\n>\n> (3 rows)\n>\n\nIt seems both master and standby have an exclusive lock on db:16384\nand relation:12141. Which is this database/relation and why is the\napp/database holding a lock on it?\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 11 Jan 2022 18:58:19 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Ext:] Re: Stream Replication not working" }, { "msg_contents": "Satya,\nI am a newbie on postgresql, I have no previous experience with postgresql and I need to get this replication working. In looking at the data that the pg_lock is showing, I do not know how to interpret it.\nI will really appreciate any help you can give me in resolving this issue.\nRegards,\nAllie\n\nFrom: Sushant Postgres <sushant.postgres@gmail.com>\nDate: Tuesday, January 11, 2022 at 12:49 AM\nTo: Allie Crawford <CrawfordMA@ChurchofJesusChrist.org>\nCc: SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>, pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\nSubject: Re: [Ext:] Re: Stream Replication not working\nHi All,\n\nfor us also, logs are applying at slave server but very very slow. While checking we also have seen same set of locks to Master and Slave servers.\nPlease suggest the solution for that.\nMany Thanks in Advance !!\nThanks\n\nOn Tue, Jan 11, 2022 at 2:12 AM Allie Crawford <CrawfordMA@churchofjesuschrist.org<mailto:CrawfordMA@churchofjesuschrist.org>> wrote:\nThank you so much for your help on this Satya. 
I have detailed right below the output of the query you asked me to run.\n\nMaster\n\npostgresql@<master> ~>psql\n\npsql (13.5)\n\nType \"help\" for help.\n\n\n\npostgresql=# select * from pg_locks;\n\n locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction | pid | mode | granted | fastpath\n\n------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+---------+-----------------+---------+----------\n\n relation | 16384 | 12141 | | | | | | | | 3/6715 | 2669949 | AccessShareLock | t | t\n\n virtualxid | | | | | 3/6715 | | | | | 3/6715 | 2669949 | ExclusiveLock | t | t\n\n(2 rows)\n\n\n\npostgresql=#\n\n\nStandby\npostgresql@<standby> ~>psql\npsql (13.5)\nType \"help\" for help.\n\npostgresql=# select * from pg_locks;\n locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction | pid | mode | granted | fastpath\n------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+--------+-----------------+---------+----------\n relation | 16384 | 12141 | | | | | | | | 2/50 | 642064 | AccessShareLock | t | t\n virtualxid | | | | | 2/50 | | | | | 2/50 | 642064 | ExclusiveLock | t | t\n virtualxid | | | | | 1/1 | | | | | 1/0 | 17333 | ExclusiveLock | t | t\n(3 rows)\n\npostgresql=#\n\n\n\n\nFrom: SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com<mailto:satyanarlapuram@gmail.com>>\nDate: Monday, January 10, 2022 at 1:06 PM\nTo: Allie Crawford <CrawfordMA@ChurchofJesusChrist.org>\nCc: pgsql-hackers@lists.postgresql.org<mailto:pgsql-hackers@lists.postgresql.org> <pgsql-hackers@lists.postgresql.org<mailto:pgsql-hackers@lists.postgresql.org>>\nSubject: [Ext:] Re: Stream Replication not working\n[External Email]\nSeems there is a problem with the replay on your standby. 
Either it is too slow or stuck behind some locks ( replay_lag of 20:38:47.00904 indicates this and the flush_lsn is the same as lsn on primary ) . Run pg_locks to see if the replay is stuck behind a lock.\n\n\n\nOn Mon, Jan 10, 2022 at 11:53 AM Allie Crawford <CrawfordMA@churchofjesuschrist.org<mailto:CrawfordMA@churchofjesuschrist.org>> wrote:\nHi All,\nI have implemented Stream replication in one of my environments, and for some reason even though all the health checks are showing that the replication is working, when I run manual tests to see if changes are being replicated, the changes are not replicated to the standby postgresql environment. I have been researching for two day and I cannot find any documentation that talks about the case I am running into. I will appreciate if anybody could take a look at the details I have detailed below and give me some guidance on where the problem might be that is preventing my changes for being replicated. Even though I was able to instantiate the standby while firewalld was enabled, I decided to disable it just in case that it was causing any issue to the manual changes, but disabling firewalld has not had any effect, I am still not able to get the manual changes test to be replicated to the standby site. As you will see in the details below, the streaming is working, both sites are in sync to the latest WAL but for some reasons the latest changes are not on the standby site. 
How is it possible that the standby site is completely in sync but yet does not contain the latest changes?\n\nThanks in advance for any help you can give me with this problem.\n\nRegards,\nAllie\n\nDetails:\n\nMaster postgresql Environment\n\npostgresql=# select * from pg_stat_replication;\n\n-[ RECORD 1 ]----+------------------------------\n\npid | 1979089\n\nusesysid | 16404\n\nusename | replacct\n\napplication_name | walreceiver\n\nclient_addr | <standby server IP>\n\nclient_hostname | <standby server name>\n\nclient_port | 55096\n\nbackend_start | 2022-01-06 17:29:51.542784-07\n\nbackend_xmin |\n\nstate | streaming\n\nsent_lsn | 0/35000788\n\nwrite_lsn | 0/35000788\n\nflush_lsn | 0/35000788\n\nreplay_lsn | 0/31000500\n\nwrite_lag | 00:00:00.001611\n\nflush_lag | 00:00:00.001693\n\nreplay_lag | 20:38:47.00904\n\nsync_priority | 1\n\nsync_state | sync\n\nreply_time | 2022-01-07 14:11:58.996277-07\n\n\n\npostgresql=#\n\n\npostgresql=# select * from pg_roles;\n\n rolname | rolsuper | rolinherit | rolcreaterole | rolcreatedb | rolcanlogin | rolreplication | rolconnlimit | rolpassword | rolvaliduntil | rolbypassrls | rolconfig | oid\n\n---------------------------+----------+------------+---------------+-------------+-------------+----------------+--------------+-------------+---------------+--------------+-----------+-------\n\n postgresql | t | t | t | t | t | t | -1 | ******** | | t | | 10\n\n pg_monitor | f | t | f | f | f | f | -1 | ******** | | f | | 3373\n\n pg_read_all_settings | f | t | f | f | f | f | -1 | ******** | | f | | 3374\n\n pg_read_all_stats | f | t | f | f | f | f | -1 | ******** | | f | | 3375\n\n pg_stat_scan_tables | f | t | f | f | f | f | -1 | ******** | | f | | 3377\n\n pg_read_server_files | f | t | f | f | f | f | -1 | ******** | | f | | 4569\n\n pg_write_server_files | f | t | f | f | f | f | -1 | ******** | | f | | 4570\n\n pg_execute_server_program | f | t | f | f | f | f | -1 | ******** | | f | | 4571\n\n pg_signal_backend | f | t | f 
| f | f | f | -1 | ******** | | f | | 4200\n\n replacct | t | t | t | t | t | t | -1 | ******** | | t | | 16404\n\n(10 rows)\n\n\n\npostgresql=#\n\n\npostgresql=# create database test_replication_3;\n\nCREATE DATABASE\n\npostgresql=#\n\n\n\npostgresql=# select datname from pg_database;\n\n datname\n\n--------------------\n\n postgres\n\n postgresql\n\n template1\n\n template0\n\n stream\n\n test_replication\n\n test_replication_2\n\n test_replication_3\n\n(8 rows)\n\n\n\npostgresql=#\n\n\n\npostgresql=# SELECT pg_current_wal_lsn();\n\n pg_current_wal_lsn\n\n--------------------\n\n 0/35000788\n\n(1 row)\n\n\n\npostgresql=#\n\n\nStandby postgresql Environment\npostgresql=# select * from pg_stat_wal_receiver;\n-[ RECORD 1 ]---------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\npid | 17340\nstatus | streaming\nreceive_start_lsn | 0/30000000\nreceive_start_tli | 1\nwritten_lsn | 0/35000788\nflushed_lsn | 0/35000788\nreceived_tli | 1\nlast_msg_send_time | 2022-01-07 14:09:48.766823-07\nlast_msg_receipt_time | 2022-01-07 14:09:48.767581-07\nlatest_end_lsn | 0/35000788\nlatest_end_time | 2022-01-07 14:08:48.663693-07\nslot_name | wal_req_x_replica\nsender_host | <Master Server IP>\nsender_port | <Master server postgresql port#>\nconninfo | user=replacct password=******** channel_binding=prefer dbname=replication host=<Master server IP> port=<postgresql port#> fallback_application_name=walreceiver sslmode=prefer sslcompression=0 ssl_min_protocol_version=TLSv1.2 gssencmode=prefer krbsrvname=postgres target_session_attrs=any\n\npostgresql=#\n\npostgresql=# select datname from pg_database;\n datname\n------------\n postgres\n postgresql\n template1\n template0\n stream\n(5 rows)\n\npostgresql=# select pg_last_wal_receive_lsn();\n 
pg_last_wal_receive_lsn \n-------------------------\n 0/35000788\n(1 row)\n \npostgresql=#", "msg_date": "Tue, 11 Jan 2022 14:47:20 +0000", "msg_from": "Allie Crawford <CrawfordMA@ChurchofJesusChrist.org>", "msg_from_op": true, "msg_subject": "Re: [Ext:] Re: Stream Replication not working" }, { "msg_contents": "Amit,\nThank you for your help in 
trying to understand the information that the pg_locks table is showing. Regarding your question, I am not sure how to answer it. How do I figure out which database and relation is db:16384 and relation:12141?\n\nThanks,\nAllie\n\nFrom: Amit Kapila <amit.kapila16@gmail.com>\nDate: Tuesday, January 11, 2022 at 6:28 AM\nTo: Allie Crawford <CrawfordMA@ChurchofJesusChrist.org>\nCc: SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>, pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\nSubject: Re: [Ext:] Re: Stream Replication not working\nIt seems both master and standby have an exclusive lock on db:16384\nand relation:12141. Which is this database/relation and why is the\napp/database holding a lock on it?\n\n\n--\nWith Regards,\nAmit Kapila.\n\nFrom: Allie Crawford <CrawfordMA@ChurchofJesusChrist.org>\nDate: Tuesday, January 11, 2022 at 7:47 AM\nTo: Sushant Postgres <sushant.postgres@gmail.com>\nCc: SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>, pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\nSubject: Re: [Ext:] Re: Stream Replication not working\nSatya,\nI am a newbie on postgresql, I have no previous experience with postgresql and I need to get this replication working. In looking at the data that pg_locks is showing, I do not know how to interpret it.\nI will really appreciate any help you can give me in resolving this issue.\nRegards,\nAllie\n\nFrom: Sushant Postgres <sushant.postgres@gmail.com>\nDate: Tuesday, January 11, 2022 at 12:49 AM\nTo: Allie Crawford <CrawfordMA@ChurchofJesusChrist.org>\nCc: SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>, pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\nSubject: Re: [Ext:] Re: Stream Replication not working\nHi All,\n\nfor us also, logs are applying at slave server but very very slow. 
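Slow or stalled replay like this can be confirmed directly on the standby before digging into locks; a sketch using standard monitoring functions (run on the standby):

```sql
-- If this returns true, someone called pg_wal_replay_pause() and WAL
-- keeps being received and flushed but is never applied.
SELECT pg_is_wal_replay_paused();

-- Compare what has been received with what has been replayed;
-- a growing gap means the startup process is stuck or slow.
SELECT pg_last_wal_receive_lsn(), pg_last_wal_replay_lsn();

-- Recovery conflicts per database (lock waits, snapshot conflicts, ...).
SELECT * FROM pg_stat_database_conflicts;
```

If replay turns out to be paused, `SELECT pg_wal_replay_resume();` lets the standby catch up.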
While checking we also have seen same set of locks to Master and Slave servers.\nPlease suggest the solution for that.\nMany Thanks in Advance !!\nThanks\n\nOn Tue, Jan 11, 2022 at 2:12 AM Allie Crawford <CrawfordMA@churchofjesuschrist.org<mailto:CrawfordMA@churchofjesuschrist.org>> wrote:\nThank you so much for your help on this Satya. I have detailed right below the output of the query you asked me to run.\n\nMaster\n\npostgresql@<master> ~>psql\n\npsql (13.5)\n\nType \"help\" for help.\n\n\n\npostgresql=# select * from pg_locks;\n\n locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction | pid | mode | granted | fastpath\n\n------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+---------+-----------------+---------+----------\n\n relation | 16384 | 12141 | | | | | | | | 3/6715 | 2669949 | AccessShareLock | t | t\n\n virtualxid | | | | | 3/6715 | | | | | 3/6715 | 2669949 | ExclusiveLock | t | t\n\n(2 rows)\n\n\n\npostgresql=#\n\n\nStandby\npostgresql@<standby> ~>psql\npsql (13.5)\nType \"help\" for help.\n\npostgresql=# select * from pg_locks;\n locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction | pid | mode | granted | fastpath\n------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+--------+-----------------+---------+----------\n relation | 16384 | 12141 | | | | | | | | 2/50 | 642064 | AccessShareLock | t | t\n virtualxid | | | | | 2/50 | | | | | 2/50 | 642064 | ExclusiveLock | t | t\n virtualxid | | | | | 1/1 | | | | | 1/0 | 17333 | ExclusiveLock | t | t\n(3 rows)\n\npostgresql=#\n\n\n\n\nFrom: SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com<mailto:satyanarlapuram@gmail.com>>\nDate: Monday, January 10, 2022 at 1:06 PM\nTo: Allie Crawford 
<CrawfordMA@ChurchofJesusChrist.org>\nCc: pgsql-hackers@lists.postgresql.org<mailto:pgsql-hackers@lists.postgresql.org> <pgsql-hackers@lists.postgresql.org<mailto:pgsql-hackers@lists.postgresql.org>>\nSubject: [Ext:] Re: Stream Replication not working\n[External Email]\nSeems there is a problem with the replay on your standby. Either it is too slow or stuck behind some locks ( replay_lag of 20:38:47.00904 indicates this and the flush_lsn is the same as lsn on primary ) . Run pg_locks to see if the replay is stuck behind a lock.\n\n\n\nOn Mon, Jan 10, 2022 at 11:53 AM Allie Crawford <CrawfordMA@churchofjesuschrist.org<mailto:CrawfordMA@churchofjesuschrist.org>> wrote:\nHi All,\nI have implemented Stream replication in one of my environments, and for some reason even though all the health checks are showing that the replication is working, when I run manual tests to see if changes are being replicated, the changes are not replicated to the standby postgresql environment. I have been researching for two day and I cannot find any documentation that talks about the case I am running into. I will appreciate if anybody could take a look at the details I have detailed below and give me some guidance on where the problem might be that is preventing my changes for being replicated. Even though I was able to instantiate the standby while firewalld was enabled, I decided to disable it just in case that it was causing any issue to the manual changes, but disabling firewalld has not had any effect, I am still not able to get the manual changes test to be replicated to the standby site. As you will see in the details below, the streaming is working, both sites are in sync to the latest WAL but for some reasons the latest changes are not on the standby site. 
How is it possible that the standby site is completely in sync but yet does not contain the latest changes?\n\nThanks in advance for any help you can give me with this problem.\n\nRegards,\nAllie\n\nDetails:\n\nMaster postgresql Environment\n\npostgresql=# select * from pg_stat_replication;\n\n-[ RECORD 1 ]----+------------------------------\n\npid | 1979089\n\nusesysid | 16404\n\nusename | replacct\n\napplication_name | walreceiver\n\nclient_addr | <standby server IP>\n\nclient_hostname | <standby server name>\n\nclient_port | 55096\n\nbackend_start | 2022-01-06 17:29:51.542784-07\n\nbackend_xmin |\n\nstate | streaming\n\nsent_lsn | 0/35000788\n\nwrite_lsn | 0/35000788\n\nflush_lsn | 0/35000788\n\nreplay_lsn | 0/31000500\n\nwrite_lag | 00:00:00.001611\n\nflush_lag | 00:00:00.001693\n\nreplay_lag | 20:38:47.00904\n\nsync_priority | 1\n\nsync_state | sync\n\nreply_time | 2022-01-07 14:11:58.996277-07\n\n\n\npostgresql=#\n\n\npostgresql=# select * from pg_roles;\n\n rolname | rolsuper | rolinherit | rolcreaterole | rolcreatedb | rolcanlogin | rolreplication | rolconnlimit | rolpassword | rolvaliduntil | rolbypassrls | rolconfig | oid\n\n---------------------------+----------+------------+---------------+-------------+-------------+----------------+--------------+-------------+---------------+--------------+-----------+-------\n\n postgresql | t | t | t | t | t | t | -1 | ******** | | t | | 10\n\n pg_monitor | f | t | f | f | f | f | -1 | ******** | | f | | 3373\n\n pg_read_all_settings | f | t | f | f | f | f | -1 | ******** | | f | | 3374\n\n pg_read_all_stats | f | t | f | f | f | f | -1 | ******** | | f | | 3375\n\n pg_stat_scan_tables | f | t | f | f | f | f | -1 | ******** | | f | | 3377\n\n pg_read_server_files | f | t | f | f | f | f | -1 | ******** | | f | | 4569\n\n pg_write_server_files | f | t | f | f | f | f | -1 | ******** | | f | | 4570\n\n pg_execute_server_program | f | t | f | f | f | f | -1 | ******** | | f | | 4571\n\n pg_signal_backend | f | t | f 
| f | f | f | -1 | ******** | | f | | 4200\n\n replacct | t | t | t | t | t | t | -1 | ******** | | t | | 16404\n\n(10 rows)\n\n\n\npostgresql=#\n\n\npostgresql=# create database test_replication_3;\n\nCREATE DATABASE\n\npostgresql=#\n\n\n\npostgresql=# select datname from pg_database;\n\n datname\n\n--------------------\n\n postgres\n\n postgresql\n\n template1\n\n template0\n\n stream\n\n test_replication\n\n test_replication_2\n\n test_replication_3\n\n(8 rows)\n\n\n\npostgresql=#\n\n\n\npostgresql=# SELECT pg_current_wal_lsn();\n\n pg_current_wal_lsn\n\n--------------------\n\n 0/35000788\n\n(1 row)\n\n\n\npostgresql=#\n\n\nStandby postgresql Environment\npostgresql=# select * from pg_stat_wal_receiver;\n-[ RECORD 1 ]---------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\npid | 17340\nstatus | streaming\nreceive_start_lsn | 0/30000000\nreceive_start_tli | 1\nwritten_lsn | 0/35000788\nflushed_lsn | 0/35000788\nreceived_tli | 1\nlast_msg_send_time | 2022-01-07 14:09:48.766823-07\nlast_msg_receipt_time | 2022-01-07 14:09:48.767581-07\nlatest_end_lsn | 0/35000788\nlatest_end_time | 2022-01-07 14:08:48.663693-07\nslot_name | wal_req_x_replica\nsender_host | <Master Server IP>\nsender_port | <Master server postgresql port#>\nconninfo | user=replacct password=******** channel_binding=prefer dbname=replication host=<Master server IP> port=<postgresql port#> fallback_application_name=walreceiver sslmode=prefer sslcompression=0 ssl_min_protocol_version=TLSv1.2 gssencmode=prefer krbsrvname=postgres target_session_attrs=any\n\npostgresql=#\n\npostgresql=# select datname from pg_database;\n datname\n------------\n postgres\n postgresql\n template1\n template0\n stream\n(5 rows)\n\npostgresql=# select pg_last_wal_receive_lsn();\n 
pg_last_wal_receive_lsn\n-------------------------\n 0/35000788\n(1 row)\n\npostgresql=#", "msg_date": "Tue, 11 Jan 2022 15:05:55 +0000", "msg_from": "Allie Crawford <CrawfordMA@ChurchofJesusChrist.org>", "msg_from_op": true, "msg_subject": "Re: [Ext:] Re: Stream Replication not working" }, { "msg_contents": "Hi.\n\nAt Tue, 11 Jan 2022 15:05:55 +0000, Allie Crawford <CrawfordMA@ChurchofJesusChrist.org> wrote in \n> er it. How do I figure out which database and relation is db:16384\n> and relation:12141.?\n\nOn any database,\n\nselect datname from pg_database where oid = 16384;\n\nThen on the shown database,\n\nselect relname from pg_class where oid = 12141;\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 12 Jan 2022 10:18:05 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Ext:] Re: Stream Replication not working" }, { "msg_contents": "I was able to figure out what the problem was that was preventing my PostgreSQL Stream replication from replicating the changes even though all the health checks queries were showing that the replication had no problems.\nI am a newbie and I am learning PostgreSQL on my own, so it took me a while to figure this out, so I am sharing this with the community in case somebody else in the future runs into the same problem.\n\n 1. 
In reading the postgresql documentation I found a view called pg_stat_activity that shows the session activity of the database, showing the status and the wait events associated with the session\n 2. In checking the session activity in the pg_stat_activity view, I was able to identify the following:\n\n-[ RECORD 2 ]----+--------------------------------\n\ndatid |\n\ndatname |\n\npid | 17333\n\nleader_pid |\n\nusesysid |\n\nusename |\n\napplication_name |\n\nclient_addr |\n\nclient_hostname |\n\nclient_port |\n\nbackend_start | 2022-01-06 17:29:51.503073-07\n\nxact_start |\n\nquery_start |\n\nstate_change |\n\nwait_event_type | IPC\n\nwait_event | RecoveryPause\n\nstate |\n\nbackend_xid |\n\nbackend_xmin |\n\nquery |\n\nbackend_type | startup\n\n\n 3. So I started to research the wait event “RecoveryPause” and I found a link to the postgresql documentation that explains all the recovery_target settings https://www.postgresql.org/docs/9.5/recovery-target-settings.html\n\n 4. So I decided to review all the recovery settings my cluster had in the postgresql.conf file and I found that I had the parameter recovery_target_time configured as follows: recovery_target_time='2021-04-20 21:00:00 MST', and that is when I realized that this configuration was preventing me from applying the latest changes to the standby site, because this parameter basically sets the time up to which the recovery will proceed. This is the reason why all the health check queries were not showing any problems, because there was no problem at all; I just had a parameter misconfigured that was stopping the standby from applying any WAL files, because the recovery target was set to April 2021.\n 5. Once I figured this out, I disabled the recovery_target_time parameter on the standby site and bounced the postgresql cluster. 
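The checks above can be condensed into a few standby-side queries. This is only a sketch (catalog and function names as in PostgreSQL 13; the pg_settings query assumes PostgreSQL 12 or later, where the recovery targets are ordinary configuration parameters):

```sql
-- Is the startup (recovery) process waiting on something?
-- wait_event = RecoveryPause means WAL replay is paused,
-- e.g. because a configured recovery target has been reached.
SELECT pid, wait_event_type, wait_event
FROM pg_stat_activity
WHERE backend_type = 'startup';

-- Received vs. replayed WAL: a growing gap means replay is stalled.
SELECT pg_last_wal_receive_lsn() AS received,
       pg_last_wal_replay_lsn()  AS replayed,
       pg_is_wal_replay_paused() AS replay_paused;

-- Any recovery target configured on this standby?
SELECT name, setting
FROM pg_settings
WHERE name LIKE 'recovery_target%' AND setting <> '';
```

If recovery_target_time is set to a time in the past, replay stops there even though WAL continues to be received and flushed, which is exactly the symptom seen here.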
As soon as the standby cluster was up and running again, all the changes I did on January 6th (and that had been pending from being applied) were immediately applied, and now the standby site is completely in sync with the master, applying WAL files as they are being shipped to the standby site.\n\nThank you to all of you who sent me suggestions; even though the suggestions did not resolve the problem, they gave me ideas on which direction I needed to go to continue troubleshooting the problem.\n\nRegarding the entries on the pg_locks view (see details below in this email thread), they were showing normal activity but not an actual problem, so it did not point me to the problem but allowed me to learn about the view that shows active locks in the database. Thank you again for sharing this info with me.\n\npostgresql=# select datname from pg_database where oid = 16384;\n\n datname\n\n------------\n\n postgresql\n\n(1 row)\n\n\n\npostgresql=# select relname from pg_class where oid = 12141;\n\n relname\n\n----------\n\n pg_locks\n\n(1 row)\n\n\n\npostgresql=#\n\nHave a great week everyone.\nRegards,\nAllie\n\nFrom: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nDate: Tuesday, January 11, 2022 at 6:18 PM\nTo: Allie Crawford <CrawfordMA@ChurchofJesusChrist.org>\nCc: sushant.postgres@gmail.com <sushant.postgres@gmail.com>, amit.kapila16@gmail.com <amit.kapila16@gmail.com>, satyanarlapuram@gmail.com <satyanarlapuram@gmail.com>, pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\nSubject: Re: [Ext:] Re: Stream Replication not working\nHi.\n\nAt Tue, 11 Jan 2022 15:05:55 +0000, Allie Crawford <CrawfordMA@ChurchofJesusChrist.org> wrote in\n\n> er it. 
How do I figure out which database and relation is db:16384\n> and relation:12141.?\n\nOn any database,\n\nselect datname from pg_database where oid = 16384;\n\nThen on the shown database,\n\nselect relname from pg_class where oid = 12141;\n\nregards.\n\n--\nKyotaro Horiguchi\nNTT Open Source Software Center\n\nFrom: Allie Crawford <CrawfordMA@ChurchofJesusChrist.org>\nDate: Tuesday, January 11, 2022 at 8:05 AM\nTo: Sushant Postgres <sushant.postgres@gmail.com>, Amit Kapila <amit.kapila16@gmail.com>\nCc: SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>, pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\nSubject: Re: [Ext:] Re: Stream Replication not working\nAmit,\nThank you for your help in trying to understand the information that the pg_locks table is showing. Regarding your question, I am not sure who to answer it. How do I figure out which database and relation is db:16384\nand relation:12141.?\n\nThanks,\nAllie\n\nFrom: Amit Kapila <amit.kapila16@gmail.com>\nDate: Tuesday, January 11, 2022 at 6:28 AM\nTo: Allie Crawford <CrawfordMA@ChurchofJesusChrist.org>\nCc: SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>, pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\nSubject: Re: [Ext:] Re: Stream Replication not working\nIt seems both master and standby have an exclusive lock on db:16384\nand relation:12141. Which is this database/relation and why is the\napp/database holding a lock on it?\n\n\n--\nWith Regards,\nAmit Kapila.\n\nFrom: Allie Crawford <CrawfordMA@ChurchofJesusChrist.org>\nDate: Tuesday, January 11, 2022 at 7:47 AM\nTo: Sushant Postgres <sushant.postgres@gmail.com>\nCc: SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>, pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\nSubject: Re: [Ext:] Re: Stream Replication not working\nSatya,\nI am a newbie on postgresql, I have no previous experience with postgresql and I need to get this replication working. 
In looking at the data that the pg_lock is showing, I do not know how to interpret it.\nI will really appreciate any help you can give me in resolving this issue.\nRegards,\nAllie\n\nFrom: Sushant Postgres <sushant.postgres@gmail.com>\nDate: Tuesday, January 11, 2022 at 12:49 AM\nTo: Allie Crawford <CrawfordMA@ChurchofJesusChrist.org>\nCc: SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>, pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\nSubject: Re: [Ext:] Re: Stream Replication not working\nHi All,\n\nfor us also, logs are applying at slave server but very very slow. While checking we also have seen same set of locks to Master and Slave servers.\nPlease suggest the solution for that.\nMany Thanks in Advance !!\nThanks\n\nOn Tue, Jan 11, 2022 at 2:12 AM Allie Crawford <CrawfordMA@churchofjesuschrist.org<mailto:CrawfordMA@churchofjesuschrist.org>> wrote:\nThank you so much for your help on this Satya. I have detailed right below the output of the query you asked me to run.\n\nMaster\n\npostgresql@<master> ~>psql\n\npsql (13.5)\n\nType \"help\" for help.\n\n\n\npostgresql=# select * from pg_locks;\n\n locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction | pid | mode | granted | fastpath\n\n------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+---------+-----------------+---------+----------\n\n relation | 16384 | 12141 | | | | | | | | 3/6715 | 2669949 | AccessShareLock | t | t\n\n virtualxid | | | | | 3/6715 | | | | | 3/6715 | 2669949 | ExclusiveLock | t | t\n\n(2 rows)\n\n\n\npostgresql=#\n\n\nStandby\npostgresql@<standby> ~>psql\npsql (13.5)\nType \"help\" for help.\n\npostgresql=# select * from pg_locks;\n locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction | pid | mode | granted | 
fastpath\n------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+--------+-----------------+---------+----------\n relation | 16384 | 12141 | | | | | | | | 2/50 | 642064 | AccessShareLock | t | t\n virtualxid | | | | | 2/50 | | | | | 2/50 | 642064 | ExclusiveLock | t | t\n virtualxid | | | | | 1/1 | | | | | 1/0 | 17333 | ExclusiveLock | t | t\n(3 rows)\n\npostgresql=#\n\n\n\n\nFrom: SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com<mailto:satyanarlapuram@gmail.com>>\nDate: Monday, January 10, 2022 at 1:06 PM\nTo: Allie Crawford <CrawfordMA@ChurchofJesusChrist.org>\nCc: pgsql-hackers@lists.postgresql.org<mailto:pgsql-hackers@lists.postgresql.org> <pgsql-hackers@lists.postgresql.org<mailto:pgsql-hackers@lists.postgresql.org>>\nSubject: [Ext:] Re: Stream Replication not working\n[External Email]\nSeems there is a problem with the replay on your standby. Either it is too slow or stuck behind some locks ( replay_lag of 20:38:47.00904 indicates this and the flush_lsn is the same as lsn on primary ) . Run pg_locks to see if the replay is stuck behind a lock.\n\n\n\nOn Mon, Jan 10, 2022 at 11:53 AM Allie Crawford <CrawfordMA@churchofjesuschrist.org<mailto:CrawfordMA@churchofjesuschrist.org>> wrote:\nHi All,\nI have implemented Stream replication in one of my environments, and for some reason even though all the health checks are showing that the replication is working, when I run manual tests to see if changes are being replicated, the changes are not replicated to the standby postgresql environment. I have been researching for two day and I cannot find any documentation that talks about the case I am running into. I will appreciate if anybody could take a look at the details I have detailed below and give me some guidance on where the problem might be that is preventing my changes for being replicated. 
Even though I was able to instantiate the standby while firewalld was enabled, I decided to disable it just in case that it was causing any issue to the manual changes, but disabling firewalld has not had any effect, I am still not able to get the manual changes test to be replicated to the standby site. As you will see in the details below, the streaming is working, both sites are in sync to the latest WAL but for some reasons the latest changes are not on the standby site. How is it possible that the standby site is completely in sync but yet does not contain the latest changes?\n\nThanks in advance for any help you can give me with this problem.\n\nRegards,\nAllie\n\nDetails:\n\nMaster postgresql Environment\n\npostgresql=# select * from pg_stat_replication;\n\n-[ RECORD 1 ]----+------------------------------\n\npid | 1979089\n\nusesysid | 16404\n\nusename | replacct\n\napplication_name | walreceiver\n\nclient_addr | <standby server IP>\n\nclient_hostname | <standby server name>\n\nclient_port | 55096\n\nbackend_start | 2022-01-06 17:29:51.542784-07\n\nbackend_xmin |\n\nstate | streaming\n\nsent_lsn | 0/35000788\n\nwrite_lsn | 0/35000788\n\nflush_lsn | 0/35000788\n\nreplay_lsn | 0/31000500\n\nwrite_lag | 00:00:00.001611\n\nflush_lag | 00:00:00.001693\n\nreplay_lag | 20:38:47.00904\n\nsync_priority | 1\n\nsync_state | sync\n\nreply_time | 2022-01-07 14:11:58.996277-07\n\n\n\npostgresql=#\n\n\npostgresql=# select * from pg_roles;\n\n rolname | rolsuper | rolinherit | rolcreaterole | rolcreatedb | rolcanlogin | rolreplication | rolconnlimit | rolpassword | rolvaliduntil | rolbypassrls | rolconfig | oid\n\n---------------------------+----------+------------+---------------+-------------+-------------+----------------+--------------+-------------+---------------+--------------+-----------+-------\n\n postgresql | t | t | t | t | t | t | -1 | ******** | | t | | 10\n\n pg_monitor | f | t | f | f | f | f | -1 | ******** | | f | | 3373\n\n pg_read_all_settings | f | t | 
f | f | f | f | -1 | ******** | | f | | 3374\n\n pg_read_all_stats | f | t | f | f | f | f | -1 | ******** | | f | | 3375\n\n pg_stat_scan_tables | f | t | f | f | f | f | -1 | ******** | | f | | 3377\n\n pg_read_server_files | f | t | f | f | f | f | -1 | ******** | | f | | 4569\n\n pg_write_server_files | f | t | f | f | f | f | -1 | ******** | | f | | 4570\n\n pg_execute_server_program | f | t | f | f | f | f | -1 | ******** | | f | | 4571\n\n pg_signal_backend | f | t | f | f | f | f | -1 | ******** | | f | | 4200\n\n replacct | t | t | t | t | t | t | -1 | ******** | | t | | 16404\n\n(10 rows)\n\n\n\npostgresql=#\n\n\npostgresql=# create database test_replication_3;\n\nCREATE DATABASE\n\npostgresql=#\n\n\n\npostgresql=# select datname from pg_database;\n\n datname\n\n--------------------\n\n postgres\n\n postgresql\n\n template1\n\n template0\n\n stream\n\n test_replication\n\n test_replication_2\n\n test_replication_3\n\n(8 rows)\n\n\n\npostgresql=#\n\n\n\npostgresql=# SELECT pg_current_wal_lsn();\n\n pg_current_wal_lsn\n\n--------------------\n\n 0/35000788\n\n(1 row)\n\n\n\npostgresql=#\n\n\nStandby postgresql Environment\npostgresql=# select * from pg_stat_wal_receiver;\n-[ RECORD 1 ]---------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\npid | 17340\nstatus | streaming\nreceive_start_lsn | 0/30000000\nreceive_start_tli | 1\nwritten_lsn | 0/35000788\nflushed_lsn | 0/35000788\nreceived_tli | 1\nlast_msg_send_time | 2022-01-07 14:09:48.766823-07\nlast_msg_receipt_time | 2022-01-07 14:09:48.767581-07\nlatest_end_lsn | 0/35000788\nlatest_end_time | 2022-01-07 14:08:48.663693-07\nslot_name | wal_req_x_replica\nsender_host | <Master Server IP>\nsender_port | <Master server postgresql port#>\nconninfo | user=replacct password=******** 
channel_binding=prefer dbname=replication host=<Master server IP> port=<postgresql port#> fallback_application_name=walreceiver sslmode=prefer sslcompression=0 ssl_min_protocol_version=TLSv1.2 gssencmode=prefer krbsrvname=postgres target_session_attrs=any\n\npostgresql=#\n\npostgresql=# select datname from pg_database;\n datname\n------------\n postgres\n postgresql\n template1\n template0\n stream\n(5 rows)\n\npostgresql=# select pg_last_wal_receive_lsn();\n pg_last_wal_receive_lsn\n-------------------------\n 0/35000788\n(1 row)\n\npostgresql=#\n\n\n\n\n\n\n\n\n\n\n\nI was able to figure out what the problem was that was preventing my PostgreSQL Stream replication from replicating the changes even though all the health checks queries were showing that the replication had\n no problems.\nI am a newbie and I am learning PostgreSQL on my own, so it took me a while to figure this out, so I am sharing this with the community in case somebody else in the future runs into the same problem.\n\nIn reading the postgresql documentation I found a view called pg_stat_activity that shows the session activity of the database showing the status and\n the wait events associate with the sessionIn checking the session activity in the pg_stat_activity view, I was able to identify the following:\n\n-[ RECORD 2 ]----+--------------------------------\ndatid           \n| \ndatname         \n| \npid             \n| 17333\nleader_pid      \n| \nusesysid        \n| \nusename         \n| \napplication_name | \nclient_addr     \n| \nclient_hostname \n| \nclient_port     \n| \nbackend_start   \n| 2022-01-06 17:29:51.503073-07\nxact_start      \n| \nquery_start     \n| \nstate_change \n    \n| \nwait_event_type \n| IPC\nwait_event      \n| RecoveryPause\nstate           \n| \nbackend_xid     \n| \nbackend_xmin \n    \n| \nquery           \n| \nbackend_type \n    \n| startup\n\n\n\nSo I started to research the wait event “RecoveryPause” and I found a link to the postgresql documentation that 
explains all the recovery_target setttings\nhttps://www.postgresql.org/docs/9.5/recovery-target-settings.html\n\n\nSo I dediced to review all the recovery settings my cluster had in the postgresql.conf file and I found that I had the parameter recovery_target_time\n configured as follows recovery_target_time='2021-04-20 21:00:00 MST', and that is when I realized that this configuration was preventing me for applying the latest changes to the standby site, because this parameter basically sets the time up to which the\n recovery will proceed. This is the reason why all the health check queries where not showing any problems, because there was no problem at all, I just had a parameter misconfigured that was stopping the standby from applying any WAL files because the recovery\n target was set up to April 2021.Once I figured this out, I disabled the recovery_target_time parameter on the standby site and bounce the postgresql cluster. As soon as the standby\n cluster was up and running again all the changes I did on January 6th (and that had been pending from being applied) were immediately applied and now the standby site is completely in sync with the master, and applying WAL files  as they are being\n shipped to the standby site.\n \nThank you to all of you that sent me suggestions, even though the suggestions did not resolve the problem they gave me ideas on which direction I needed to go to continue troubleshooting the problem.\n \nRegarding the entries on the pg_locks view (see details below in this email thread), they were showing normal activity but not an actual problem, so it did not point me to the problem but allowed me to learn\n about the view that shows active locks in the database. 
Thank you again for sharing this info with me.\npostgresql=# select datname from pg_database where oid = 16384;\n \ndatname   \n------------\n postgresql\n(1 row)\n \npostgresql=# select relname from pg_class where oid = 12141;\n relname\n \n----------\n pg_locks\n(1 row)\n \npostgresql=#\n\nHave a great week everyone.\nRegards,\nAllie\n \nFrom:\nKyotaro Horiguchi <horikyota.ntt@gmail.com>\nDate: Tuesday, January 11, 2022 at 6:18 PM\nTo: Allie Crawford <CrawfordMA@ChurchofJesusChrist.org>\nCc: sushant.postgres@gmail.com <sushant.postgres@gmail.com>, amit.kapila16@gmail.com <amit.kapila16@gmail.com>, satyanarlapuram@gmail.com <satyanarlapuram@gmail.com>, pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\nSubject: Re: [Ext:] Re: Stream Replication not working\nHi.\n\nAt Tue, 11 Jan 2022 15:05:55 +0000, Allie Crawford <CrawfordMA@ChurchofJesusChrist.org> wrote in\n\n> er it. How do I figure out which database and relation is db:16384\n> and relation:12141.?\n\nOn any database,\n\nselect datname from pg_database where oid = 16384;\n\nThen on the shown database,\n\nselect relname from pg_class where oid = 12141;\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n \n\nFrom:\nAllie Crawford <CrawfordMA@ChurchofJesusChrist.org>\nDate: Tuesday, January 11, 2022 at 8:05 AM\nTo: Sushant Postgres <sushant.postgres@gmail.com>, Amit Kapila <amit.kapila16@gmail.com>\nCc: SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>, pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\nSubject: Re: [Ext:] Re: Stream Replication not working\n\nAmit,\nThank you for your help in trying to understand the information that the pg_locks table is showing. Regarding your question, I am not sure who to answer it. 
How do I figure out which database and relation\n is db:16384\nand relation:12141.?\n \nThanks,\nAllie\n \nFrom:\nAmit Kapila <amit.kapila16@gmail.com>\nDate: Tuesday, January 11, 2022 at 6:28 AM\nTo: Allie Crawford <CrawfordMA@ChurchofJesusChrist.org>\nCc: SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>, pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\nSubject: Re: [Ext:] Re: Stream Replication not working\nIt seems both master and standby have an exclusive lock on db:16384\nand relation:12141. Which is this database/relation and why is the\napp/database holding a lock on it?\n\n\n-- \nWith Regards,\nAmit Kapila.\n \n\nFrom:\nAllie Crawford <CrawfordMA@ChurchofJesusChrist.org>\nDate: Tuesday, January 11, 2022 at 7:47 AM\nTo: Sushant Postgres <sushant.postgres@gmail.com>\nCc: SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>, pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\nSubject: Re: [Ext:] Re: Stream Replication not working\n\nSatya,\nI am a newbie on postgresql, I have no previous experience with postgresql and I need to get this replication working. In looking at the data that the pg_lock is showing, I do not know how to interpret it.\nI will really appreciate any help you can give me in resolving this issue.\nRegards,\nAllie\n \n\nFrom:\nSushant Postgres <sushant.postgres@gmail.com>\nDate: Tuesday, January 11, 2022 at 12:49 AM\nTo: Allie Crawford <CrawfordMA@ChurchofJesusChrist.org>\nCc: SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>, pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\nSubject: Re: [Ext:] Re: Stream Replication not working\n\n\nHi All,\n\n \n\n\nfor us also, logs are applying at slave server but very very slow. 
While checking we also have seen same set of locks to Master and Slave servers.\n\n\nPlease suggest the solution for that.\n\n\nMany Thanks in Advance !!\n\n\nThanks\n\n\n \n\n\nOn Tue, Jan 11, 2022 at 2:12 AM Allie Crawford <CrawfordMA@churchofjesuschrist.org> wrote:\n\n\n\n\nThank you so much for your help on this Satya. I have detailed right below the output of the query you asked me to run.\n \nMaster\n\npostgresql@<master> ~>psql\npsql (13.5)\nType \"help\" for help.\n \npostgresql=# select * from pg_locks;\n  locktype  | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction |   pid   |      mode       | granted | fastpath \n------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+---------+-----------------+---------+----------\n relation   |    16384 |    12141 |      |       |            |               |         |       |          | 3/6715             | 2669949 | AccessShareLock | t       | t\n virtualxid |          |          |      |       | 3/6715     |               |         |       |          | 3/6715             | 2669949 | ExclusiveLock   | t       | t\n(2 rows)\n \npostgresql=#\n \n \nStandby\npostgresql@<standby> ~>psql\npsql (13.5)\nType \"help\" for help.\n \npostgresql=# select * from pg_locks;\n  locktype  | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction |  pid   |      mode       | granted | fastpath \n------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+--------+-----------------+---------+----------\n relation   |    16384 |    12141 |      |       |            |               |         |       |          | 2/50               | 642064 | AccessShareLock | t       | t\n virtualxid |          |          |      |       | 2/50       |  
             |         |       |          | 2/50  \n             | 642064 | ExclusiveLock   | t       | t\n virtualxid |          |          |      |       | 1/1        |               |         |       |          | 1/0   \n             |  17333 | ExclusiveLock   | t       | t\n(3 rows)\n \npostgresql=#\n \n \n\n\n\n \n\n\n\n \n\nFrom:\nSATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>\nDate: Monday, January 10, 2022 at 1:06 PM\nTo: Allie Crawford <CrawfordMA@ChurchofJesusChrist.org>\nCc: pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\nSubject: [Ext:] Re: Stream Replication not working\n\n[External Email]\n\n\nSeems there is a problem with the replay on your standby. Either it is too slow or stuck behind some locks ( replay_lag of 20:38:47.00904\n indicates this and the flush_lsn is the same as lsn on primary ) . Run pg_locks to see if the replay is stuck behind a lock.\n\n\n \n\n\n \n\n\n \n\n\nOn Mon, Jan 10, 2022 at 11:53 AM Allie Crawford <CrawfordMA@churchofjesuschrist.org>\n wrote:\n\n\n\n\nHi All,\nI have implemented Stream replication in one of my environments, and for some reason even though all the health checks are showing that the replication\n is working, when I run manual tests to see if changes are being replicated, the changes are not replicated to the standby postgresql environment. I have been researching for two day and I cannot find any documentation that talks about the case I am running\n into. I will appreciate if anybody could take a look at the details I have detailed below and give me some guidance on where the problem might be that is preventing my changes for being replicated. Even though I was able to instantiate the standby while firewalld\n was enabled, I decided to disable it just in case that it was causing any issue to the manual changes, but disabling firewalld has not had any effect, I am still not able to get the manual changes test to be replicated to the standby site. 
As you will see in the details below, the streaming is working, both sites are in sync to the latest WAL but for some reasons the latest changes are not on the standby site. How is it possible that the standby site is completely in sync but yet does not contain the latest changes?\n \nThanks in advance for any help you can give me with this problem.\n \nRegards,\nAllie\n \nDetails:\n \nMaster\npostgresql Environment\npostgresql=# select * from pg_stat_replication;\n-[ RECORD 1 ]----+------------------------------\npid              | 1979089\nusesysid         | 16404\nusename          | replacct\napplication_name | walreceiver\nclient_addr      | <standby server IP>\nclient_hostname  | <standby server name>\nclient_port      | 55096\nbackend_start    | 2022-01-06 17:29:51.542784-07\nbackend_xmin     | \nstate            | streaming\nsent_lsn         | 0/35000788\nwrite_lsn        | 0/35000788\nflush_lsn        | 0/35000788\nreplay_lsn       | 0/31000500\nwrite_lag        | 00:00:00.001611\nflush_lag        | 00:00:00.001693\nreplay_lag       | 20:38:47.00904\nsync_priority    | 1\nsync_state       | sync\nreply_time       | 2022-01-07 14:11:58.996277-07\n \npostgresql=#\n \npostgresql=# select * from pg_roles;\n          rolname          | rolsuper | rolinherit | rolcreaterole | rolcreatedb | rolcanlogin | rolreplication | rolconnlimit | rolpassword | rolvaliduntil | rolbypassrls | rolconfig |  oid  \n---------------------------+----------+------------+---------------+-------------+-------------+----------------+--------------+-------------+---------------+--------------+-----------+-------\n postgresql                | t        | t          | t             | t           | t           | t              |           -1 | ********    |               | t            |           |    10\n pg_monitor                | f        | t          | f             | f           | f           | f              |           -1 | ********    |               | f            |           |  3373\n pg_read_all_settings      | f        | t          | f             | f           | f           | f              |           -1 | ********    |               | f            |           |  3374\n pg_read_all_stats         | f        | t          | f             | f           | f           | f              |           -1 | ********    |               | f            |           |  3375\n pg_stat_scan_tables       | f        | t          | f             | f           | f           | f              |           -1 | ********    |               | f            |           |  3377\n pg_read_server_files      | f        | t          | f             | f           | f           | f              |           -1 | ********    |               | f            |           |  4569\n pg_write_server_files     | f        | t          | f             | f           | f           | f              |           -1 | ********    |               | f            |           |  4570\n pg_execute_server_program | f        | t          | f             | f           | f           | f              |           -1 | ********    |               | f            |           |  4571\n pg_signal_backend         | f        | t          | f             | f           | f           | f              |           -1 | ********    |               | f            |           |  4200\n replacct                  | t        | t          | t             | t           | t           | t              |           -1 | ********    |               | t            |           | 16404\n(10 rows)\n \npostgresql=#\n \npostgresql=# create database test_replication_3;\nCREATE DATABASE\npostgresql=#\n \npostgresql=# select datname from 
pg_database;\n     \ndatname       \n--------------------\n postgres\n postgresql\n template1\n template0\n stream\n test_replication\n test_replication_2\n test_replication_3\n(8 rows)\n \npostgresql=#\n \npostgresql=# SELECT pg_current_wal_lsn();\n pg_current_wal_lsn \n--------------------\n 0/35000788\n(1 row)\n \npostgresql=#\n \n\n \nStandby\npostgresql Environment\npostgresql=# select * from pg_stat_wal_receiver;\n-[ RECORD 1 ]---------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\npid                   | 17340\nstatus                | streaming\nreceive_start_lsn     | 0/30000000\nreceive_start_tli     | 1\nwritten_lsn           | 0/35000788\nflushed_lsn           | 0/35000788\nreceived_tli          | 1\nlast_msg_send_time    | 2022-01-07 14:09:48.766823-07\nlast_msg_receipt_time | 2022-01-07 14:09:48.767581-07\nlatest_end_lsn        | 0/35000788\nlatest_end_time       | 2022-01-07 14:08:48.663693-07\nslot_name             | wal_req_x_replica\nsender_host           | <Master Server IP>\nsender_port           | <Master server postgresql port#>\nconninfo              | user=replacct password=******** channel_binding=prefer dbname=replication host=<Master server\n IP> port=<postgresql port#> fallback_application_name=walreceiver sslmode=prefer sslcompression=0 ssl_min_protocol_version=TLSv1.2 gssencmode=prefer krbsrvname=postgres target_session_attrs=any\n \npostgresql=#\n \npostgresql=# select datname from pg_database;\n  datname   \n------------\n postgres\n postgresql\n template1\n template0\n stream\n(5 rows)\n \npostgresql=# select pg_last_wal_receive_lsn();\n pg_last_wal_receive_lsn \n-------------------------\n 0/35000788\n(1 row)\n \npostgresql=#", "msg_date": "Wed, 19 Jan 2022 01:09:53 +0000", "msg_from": "Allie Crawford 
<CrawfordMA@ChurchofJesusChrist.org>", "msg_from_op": true, "msg_subject": "Re: [Ext:] Re: Stream Replication not working" } ]
[ { "msg_contents": "Has anyone discussed previously column-level security \"policies\" or how to\nbest manage/implement them as they don't exist yet?\n\nIn my mind we have great tools for database administrator users to have\ncolumn level security with grants, but not application users in a manner\nakin to RLS.\n\nMy current solution is to leverage a trigger with a whenClause that checks\nthe permissions. Imagine creating a publishing flow with authors and\npublishers on the same object:\n\nCREATE TABLE posts (\n id serial primary key,\n title text,\n content text,\n published boolean DEFAULT FALSE,\n author_id uuid NOT NULL DEFAULT get_curent_user_id(),\n publisher_id uuid NOT NULL DEFAULT\n'85d770e6-7c18-4e98-bbd5-160b512e6c23'\n);\n\nCREATE TRIGGER ensure_only_publisher_can_publish\n AFTER UPDATE ON posts\n FOR EACH ROW\n WHEN (\n NEW.publisher_id <> get_curent_user_id ()\n AND\n OLD.published IS DISTINCT FROM NEW.published\n )\nEXECUTE PROCEDURE throw_error ('OWNED_COLUMNS', 'published');\n\nCREATE TRIGGER ensure_only_publisher_can_publish_insert\n AFTER INSERT ON posts\n FOR EACH ROW\n WHEN (\n NEW.publisher_id <> get_curent_user_id ()\n AND\n NEW.published IS TRUE\n )\nEXECUTE PROCEDURE throw_error ('OWNED_COLUMNS', 'published');\n\nIf you want to run the example I've included a gist here that wraps all\ndeps in a tx:\nhttps://gist.github.com/pyramation/2a7b836ab47a2450b951a256dfe7cbde\n\nIt works! The author can create posts, and only the publisher can \"publish\"\nthem. However it has some disadvantages.\n\n 1. uses triggers, cannot use BYPASSRLS and have to use replication role\n 2. Behavior for INSERT to my knowledge requires an understanding of\n valid or default values\n\n#1 I could manage, I can imagine using the replication role if needed in\nsome places. #2 however, feels clunky and closely coupled to the data model\ngiven it requires default or whitelisted values.\n\nThoughts? 
Any other solutions out there I should be aware of?\n\n\n\n\n\nDan Lynch\n(734) 657-4483", "msg_date": "Mon, 19 Apr 2021 15:33:52 -0700", "msg_from": "Dan Lynch <pyramation@gmail.com>", "msg_from_op": true, "msg_subject": "column-level security policies for application users" } ]
[ { "msg_contents": "Dear team ,\n\nhi, I am sending this email to propose to join PostgreSQl program to\nenhance my skills and to keep up with market needs so kindly accept my\nproposal .\n\nthanks & regards\nkhaled", "msg_date": "Tue, 20 Apr 2021 03:17:08 +0300", "msg_from": "Khaled Anas <khaled.anasabbas82@gmail.com>", "msg_from_op": true, "msg_subject": "proposal for PostgreSQL program" }, { "msg_contents": "On Tue, Apr 20, 2021 at 03:17:08AM +0300, Khaled Anas wrote:\n> Dear team ,\n> \n> hi, I am sending this email to propose to join PostgreSQl program to enhance my\n> skills and to keep up with market needs so kindly accept my proposal .\n\nUh, there isn't an official joining process. You should read the FAQs\nand then subscribe to the appropriate email list.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Wed, 21 Apr 2021 18:14:13 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: proposal for PostgreSQL program" } ]
[ { "msg_contents": "Hi all,\n\nLike every year, I have done some tests with wal_consistency_checking\nto see if any inconsistencies have been introduced in WAL replay. And\nthe good news is that I have noticed nothing to worry about.\n\nThanks,\n--\nMichael", "msg_date": "Tue, 20 Apr 2021 11:42:12 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "HEAD looks clean with wal_consistency_checking = all" } ]
[ { "msg_contents": "Hi Peter,\n\nWhile testing wal_consistency_checking, I have noticed that by far\nmost of the runtime is spent within the regression test check_btree on\nthe series of three queries inserting each 100k tuples. This also\neats most of the run time of the test on HEAD. Could we for example\nconsider inserting less tuples with a lower fillfactor to reduce the\nruntime of the test without impacting its coverage in a meaningful\nway?\n\nThanks,\n--\nMichael", "msg_date": "Tue, 20 Apr 2021 11:50:34 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "amcheck eating most of the runtime with wal_consistency_checking " }, { "msg_contents": "On Mon, Apr 19, 2021 at 7:50 PM Michael Paquier <michael@paquier.xyz> wrote:\n> While testing wal_consistency_checking, I have noticed that by far\n> most of the runtime is spent within the regression test check_btree on\n> the series of three queries inserting each 100k tuples. This also\n> eats most of the run time of the test on HEAD. Could we for example\n> consider inserting less tuples with a lower fillfactor to reduce the\n> runtime of the test without impacting its coverage in a meaningful\n> way?\n\nI don't see much point. wal_consistency_checking is intrinsically a\ntool that increases the volume of WAL by a large multiple. Plus you\nyourself only run it once a year.\n\nI run it much more often than once a year (maybe once every 2 - 3\nmonths), but I haven't noticed this at all.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 19 Apr 2021 19:58:37 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: amcheck eating most of the runtime with wal_consistency_checking" } ]
[ { "msg_contents": "There is an omission of automatic completion of CURRENT_ROLE in tab-complete.c.\n\nBest wishes,\nWei Wang", "msg_date": "Tue, 20 Apr 2021 03:28:38 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "An omission of automatic completion in tab-complete.c" }, { "msg_contents": "On Tue, Apr 20, 2021 at 03:28:38AM +0000, wangw.fnst@fujitsu.com wrote:\n> There is an omission of automatic completion of CURRENT_ROLE in tab-complete.c.\n\nIndeed, that looks like an omission from 45b9805.\n--\nMichael", "msg_date": "Tue, 20 Apr 2021 16:41:13 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: An omission of automatic completion in tab-complete.c" }, { "msg_contents": "> There is an omission of automatic completion of CURRENT_ROLE in tab-complete.c.\n\nI invested some time in checking this patch. It passes make\ncheck-world / make installcheck-world and adds CURRENT_ROLE to the\nautomatic completion.\n\n--\nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 20 Apr 2021 11:35:13 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: An omission of automatic completion in tab-complete.c" }, { "msg_contents": "On Tue, Apr 20, 2021 at 11:35:13AM +0300, Aleksander Alekseev wrote:\n> I invested some time in checking this patch. It passes make\n> check-world / make installcheck-world and adds CURRENT_ROLE to the\n> automatic completion.\n\nThanks Aleksander and Wei. Applied.\n--\nMichael", "msg_date": "Wed, 21 Apr 2021 10:49:54 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: An omission of automatic completion in tab-complete.c" } ]
[ { "msg_contents": "Hello.\n\nIt seems to me that there's a stale description in the documentation\nof pg_basebackup.\n\nhttps://www.postgresql.org/docs/13/app-pgbasebackup.html\n\n> Note that there are some limitations in taking a backup from a standby:\n...\n> If you are using -X none, there is no guarantee that all WAL files\n> required for the backup are archived at the end of backup.\n\nActually, pg_basebackup waits for the all required files to be\narchived, which is an established behavior by commit\n52f8a59dd9@PG10. However, the same commit seems to have forgot to\nchange the doc for pg_basebackup. (The current description is\nintroduced by 9a4d51077c@PG10)\n\nThe attached is a proposal to rewrite it as the following.\n\n+ If you are using -X none, pg_basebackup may wait for a long time for\n+ all the required WAL files to be archived. In that case, You may need\n+ to call pg_switch_wal() on the primary to complete it sooner.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 20 Apr 2021 13:32:35 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Stale description for pg_basebackup" }, { "msg_contents": "At Tue, 20 Apr 2021 13:32:35 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Hello.\n> \n> It seems to me that there's a stale description in the documentation\n> of pg_basebackup.\n> \n> https://www.postgresql.org/docs/13/app-pgbasebackup.html\n> \n> > Note that there are some limitations in taking a backup from a standby:\n> ...\n> > If you are using -X none, there is no guarantee that all WAL files\n> > required for the backup are archived at the end of backup.\n> \n> Actually, pg_basebackup waits for the all required files to be\n> archived, which is an established behavior by commit\n> 52f8a59dd9@PG10. However, the same commit seems to have forgot to\n> change the doc for pg_basebackup. 
(The current description is\n> introduced by 9a4d51077c@PG10)\n> \n> The attached is a proposal to rewrite it as the following.\n> \n> + If you are using -X none, pg_basebackup may wait for a long time for\n> + all the required WAL files to be archived. In that case, You may need\n> + to call pg_switch_wal() on the primary to complete it sooner.\n\nI forgot to preserve the description about *primary*. It should be as\nthe following instead.\n\n+ If you are using -X none, there is no guarantee on the primary that\n+ all WAL files required for the backup are archived at the end of\n+ backup. When the standby is configured as archive_mode=always,\n+ pg_basebackup may wait for a long time for all the required WAL files\n+ to be archived. In that case, You may need to call pg_switch_wal() on\n+ the primary to complete it sooner.\n\nAttached.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 21 Apr 2021 10:43:30 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Stale description for pg_basebackup" }, { "msg_contents": "At Wed, 21 Apr 2021 10:43:30 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Tue, 20 Apr 2021 13:32:35 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > Hello.\n> > \n> > It seems to me that there's a stale description in the documentation\n> > of pg_basebackup.\n> > \n> > https://www.postgresql.org/docs/13/app-pgbasebackup.html\n> > \n> > > Note that there are some limitations in taking a backup from a standby:\n> > ...\n> > > If you are using -X none, there is no guarantee that all WAL files\n> > > required for the backup are archived at the end of backup.\n> > \n> > Actually, pg_basebackup waits for the all required files to be\n> > archived, which is an established behavior by commit\n> > 52f8a59dd9@PG10. However, the same commit seems to have forgot to\n> > change the doc for pg_basebackup. 
(The current description is\n> > introduced by 9a4d51077c@PG10)\n> > \n> > The attached is a proposal to rewrite it as the following.\n> > \n> > + If you are using -X none, pg_basebackup may wait for a long time for\n> > + all the required WAL files to be archived. In that case, You may need\n> > + to call pg_switch_wal() on the primary to complete it sooner.\n> \n> I forgot to preserve the description about *primary*. It should be as\n> the following instead.\n> \n> + If you are using -X none, there is no guarantee on the primary that\n> + all WAL files required for the backup are archived at the end of\n> + backup. When the standby is configured as archive_mode=always,\n> + pg_basebackup may wait for a long time for all the required WAL files\n> + to be archived. In that case, You may need to call pg_switch_wal() on\n> + the primary to complete it sooner.\n\nHmm. Some words need to be qualified. Attached.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 21 Apr 2021 11:09:37 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Stale description for pg_basebackup" }, { "msg_contents": "\n\nOn 2021/04/21 11:09, Kyotaro Horiguchi wrote:\n> At Wed, 21 Apr 2021 10:43:30 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n>> At Tue, 20 Apr 2021 13:32:35 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n>>> Hello.\n>>>\n>>> It seems to me that there's a stale description in the documentation\n>>> of pg_basebackup.\n\nI think you're right.\n\n\n> Hmm. Some words need to be qualified. 
Attached.\n\n+ If you are using <literal>-X none</literal>, there is no guarantee on\n+ the primary that all WAL files required for the backup are archived at\n+ the end of backup.\n\nI don't think that this should be picked up as a limitation of standby backup.\nBecause users basically want to make pg_basebackup wait for all required\nWAL files to be archived on the standby, in the standby backup case.\n\n\nWhen <varname>archive_mode</varname> is set\n+ to <literal>on</literal> on the\n\n\"on\" should be \"always\"?\n\n\n+ standby, <application>pg_basebackup</application> may wait for a long\n+ time for all the required WAL files to be archived. In that case, You\n+ may need to call <function>pg_switch_wal()</function> on the primary to\n+ complete it sooner.\n\nWhat about the following description?\n\n-------------------\nWhen you are using -X none, if write activity on the primary is low,\npg_basebackup may need to wait a long time for all WAL files required for\nthe backup to be archived. It may be useful to run pg_switch_wal\non the primary in order to trigger an immediate WAL file switch and archiving.\n-------------------\n\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 21 Apr 2021 23:06:56 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Stale description for pg_basebackup" }, { "msg_contents": "Ugg. I was confused.\n\n\nAt Wed, 21 Apr 2021 23:06:56 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> > Hmm. Some words need to be qualified. 
Attached.\n> \n> + If you are using <literal>-X none</literal>, there is no guarantee\n> on\n> + the primary that all WAL files required for the backup are archived\n> at\n> + the end of backup.\n> \n> I don't think that this should be picked up as a limitation of standby\n> backup.\n> Because users basically want to make pg_basebackup wait for all\n> required\n> WAL files to be archived on the standby, in the standby backup case.\n\nYeah, you're right. I think it is what I thought at first. The last\nproposal is a result of some confusion..\n\n> When <varname>archive_mode</varname> is set\n> + to <literal>on</literal> on the\n> \n> \"on\" should be \"always\"?\n\nYes..\n\n> + standby, <application>pg_basebackup</application> may wait for a\n> long\n> + time for all the required WAL files to be archived. In that case,\n> You\n> + may need to call <function>pg_switch_wal()</function> on the primary\n> to\n> + complete it sooner.\n> \n> What about the following description?\n> \n> -------------------\n> When you are using -X none, if write activity on the primary is low,\n> pg_basebackup may need to wait a long time for all WAL files required\n> for\n> the backup to be archived. It may be useful to run pg_switch_wal\n> on the primary in order to trigger an immediate WAL file switch and\n> archiving.\n> -------------------\n\nLooks far better.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 22 Apr 2021 09:25:33 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Stale description for pg_basebackup" }, { "msg_contents": "On 2021/04/22 9:25, Kyotaro Horiguchi wrote:\n>> What about the following description?\n>>\n>> -------------------\n>> When you are using -X none, if write activity on the primary is low,\n>> pg_basebackup may need to wait a long time for all WAL files required\n>> for\n>> the backup to be archived. It may be useful to run pg_switch_wal\n>> on the primary in order to trigger an immediate WAL file switch and\n>> archiving.\n>> -------------------\n> \n> Looks far better.\n\nPatch attached. I appended the following description to assist\nusers to understand why pg_basebackup may need to wait a long time\nwhen write activity is low on the primary.\n\n------------------\npg_basebackup cannot force the standby to switch to\na new WAL file at the end of backup.\n------------------\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Thu, 22 Apr 2021 10:56:10 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Stale description for pg_basebackup" }, { "msg_contents": "At Thu, 22 Apr 2021 10:56:10 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2021/04/22 9:25, Kyotaro Horiguchi wrote:\n> >> What about the following description?\n> >>\n> >> -------------------\n> >> When you are using -X none, if write activity on the primary is low,\n> >> pg_basebackup may need to wait a long time for all WAL files required\n> >> for\n> >> the backup to be archived. It may be useful to run pg_switch_wal\n> >> on the primary in order to trigger an immediate WAL file switch and\n> >> archiving.\n> >> -------------------\n> > Looks far better.\n> \n> Patch attached. I appended the following description to assist\n> users to understand why pg_basebackup may need to wait a long time\n> when write activity is low on the primary.\n> \n> ------------------\n> pg_basebackup cannot force the standby to switch to\n> a new WAL file at the end of backup.\n> ------------------\n\nI'm not sure which is the convention here, but I saw that some\nfunction names in the doc are followed by parentheses (ie\npg_switch_wal()).\n\n(prepended?) It seems a bit redundant but also a bit clearer. How\nabout the following simplification?\n\n- It may be useful to run pg_switch_wal on the primary in order to\n- trigger an immediate WAL file switch and archiving.\n+ It may be useful to run pg_switch_wal() on the primary in that case.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 22 Apr 2021 11:19:40 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Stale description for pg_basebackup" }, { "msg_contents": "\n\nOn 2021/04/22 11:19, Kyotaro Horiguchi wrote:\n> At Thu, 22 Apr 2021 10:56:10 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>\n>>\n>> On 2021/04/22 9:25, Kyotaro Horiguchi wrote:\n>>>> What about the following description?\n>>>>\n>>>> -------------------\n>>>> When you are using -X none, if write activity on the primary is low,\n>>>> pg_basebackup may need to wait a long time for all WAL files required\n>>>> for\n>>>> the backup to be archived. It may be useful to run pg_switch_wal\n>>>> on the primary in order to trigger an immediate WAL file switch and\n>>>> archiving.\n>>>> -------------------\n>>> Looks far better.\n>>\n>> Patch attached. I appended the following description to assist\n>> users to understand why pg_basebackup may need to wait a long time\n>> when write activity is low on the primary.\n>>\n>> ------------------\n>> pg_basebackup cannot force the standby to switch to\n>> a new WAL file at the end of backup.\n>> ------------------\n> \n> I'm not sure which is the convention here, but I saw that some\n> function names in the doc are followed by parentheses (ie\n> pg_switch_wal()).\n\nEither works for me. I didn't add \"()\" because I just used the same description\nas that in func.sgml.\n\n it may be useful to run <function>pg_switch_wal</function> on the\n primary in order to trigger an immediate segment switch.)\n\n\n> (prepended?) It seems a bit redundant but also a bit clearer. How\n> about the following simplification?\n> \n> - It may be useful to run pg_switch_wal on the primary in order to\n> - trigger an immediate WAL file switch and archiving.\n> + It may be useful to run pg_switch_wal() on the primary in that case.\n\nIMO \"in order to...\" part is helpful for us to understand why pg_switch_wal\nis useful in this case. So I'd like to leave it.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 22 Apr 2021 13:06:50 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Stale description for pg_basebackup" }, { "msg_contents": "At Thu, 22 Apr 2021 13:06:50 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> Either works for me. I didn't add \"()\" because I just used the same\n> description\n> as that in func.sgml.\n> \n> it may be useful to run <function>pg_switch_wal</function> on the\n> primary in order to trigger an immediate segment switch.)\n..\n> IMO \"in order to...\" part is helpful for us to understand why\n> pg_switch_wal\n> is useful in this case. So I'd like to leave it.\n\nOk, I'm fine with both of them. Thanks for the explanation.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 22 Apr 2021 13:25:27 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Stale description for pg_basebackup" }, { "msg_contents": "\n\nOn 2021/04/22 13:25, Kyotaro Horiguchi wrote:\n> At Thu, 22 Apr 2021 13:06:50 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>> Either works for me. I didn't add \"()\" because I just used the same\n>> description\n>> as that in func.sgml.\n>>\n>> it may be useful to run <function>pg_switch_wal</function> on the\n>> primary in order to trigger an immediate segment switch.)\n> ..\n>> IMO \"in order to...\" part is helpful for us to understand why\n>> pg_switch_wal\n>> is useful in this case. So I'd like to leave it.\n> \n> Ok, I'm fine with both of them. Thanks for the explanation.\n\nPushed. Thanks!\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 23 Apr 2021 15:53:02 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Stale description for pg_basebackup" } ]
[ { "msg_contents": "Hi,\n\nJust an observation: on REL_13_STABLE, $SUBJECT maps in ~170MB of\nmemory, and on master it's ~204MB. A backend running that was just\nnuked by the kernel due to lack of swap space on my tiny buildfarm\nanimal elver (a VM with 1GB RAM, 2GB swap, not doing much else).\nCould also be related to an OS upgrade ~1 week ago. It's obviously\ntime to increase the VM size which is no problem, but I thought those\nnumbers were interesting.\n\n\n", "msg_date": "Tue, 20 Apr 2021 16:43:07 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "select 'x' ~ repeat('x*y*z*', 1000);" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Just an observation: on REL_13_STABLE, $SUBJECT maps in ~170MB of\n> memory, and on master it's ~204MB. A backend running that was just\n> nuked by the kernel due to lack of swap space on my tiny buildfarm\n> animal elver (a VM with 1GB RAM, 2GB swap, not doing much else).\n\nYeah, that's not terribly surprising. Note that the point of that\ntest case is to fail: it's supposed to verify that we apply the\nREG_MAX_COMPILE_SPACE limit and error out before hitting a kernel\nOOM condition. When I redid regex memory allocation in 0fc1af174,\nthere was a question of how to map the old complexity limit to the\nnew one. I went with\n\n #define REG_MAX_COMPILE_SPACE \\\n- (100000 * sizeof(struct state) + 100000 * sizeof(struct arcbatch))\n+ (500000 * (sizeof(struct state) + 4 * sizeof(struct arc)))\n #endif\n\nknowing that that was a bit of a scale-up of the limit, but intending\nthat we'd not fail on any case that worked before. We could knock\ndown the 500000 multiplier a little, but I'm afraid we'd lose the\nit-still-works property, because the edge cases are a little different\nfor the new code than the old.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 20 Apr 2021 01:34:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: select 'x' ~ repeat('x*y*z*', 1000);" } ]
[ { "msg_contents": "Hi hackers,\n\nAs I played with the partitioned table with GRANT, I found two questions.\nLet's see an example:\n\n\nCREATE TABLE measurement (\n city_id int not null,\n logdate date not null,\n peaktemp int,\n unitsales int\n) PARTITION BY RANGE (logdate);\n\nCREATE TABLE measurement_y2006m02 PARTITION OF measurement\n FOR VALUES FROM ('2006-02-01') TO ('2006-03-01');\n\nCREATE TABLE measurement_y2006m03 PARTITION OF measurement\n FOR VALUES FROM ('2006-03-01') TO ('2006-04-01');\n\nCREATE USER a;\nGRANT SELECT ON measurement TO a;\nGRANT INSERT ON measurement TO a;\n\nI created a partitioned table with two leaf tables and only grant SELECT, INSERT on the root table to user a.\n\nThe first question is:\nAs a user a, since I don't have permission to read the leaf tables, but select from the root will return the leafs data successfully.\n\npostgres=# set role a;\npostgres=> explain select * from measurement_y2006m02;\nERROR: permission denied for table measurement_y2006m02\npostgres=> explain select * from measurement;\n QUERY PLAN\n---------------------------------------------------------------------------------------------\n Append (cost=0.00..75.50 rows=3700 width=16)\n -> Seq Scan on measurement_y2006m02 measurement_1 (cost=0.00..28.50 rows=1850 width=16)\n -> Seq Scan on measurement_y2006m03 measurement_2 (cost=0.00..28.50 rows=1850 width=16)\n(3 rows)\n\nFrom the plan, we do scan on the leaf tables without ACL check. And the reason is in expand_single_inheritance_child,\nwe always set childrte->requiredPerms = 0; Seems like we always think the child has the same permission with the partitioned table.\n\n\nFor the second question:\nAs a user a, I'm not allowed to insert any data into leaf tables.\nBut insert on the partitioned table will make the data go into leaves.\n\npostgres=> insert into measurement_y2006m02 values (1, '2006-02-01', 1, 1);\nERROR: permission denied for table measurement_y2006m02\npostgres=> insert into measurement values (1, '2006-02-01', 1, 1);\nINSERT 0 1\n\nIt makes me feel strange, we can grant different permission for partition tables, but as long as the user\nhas permission on the partitioned table, it can still see/modify the leaf tables which don't have permission.\nCan anyone help me understand the behavior?", "msg_date": "Tue, 20 Apr 2021 07:24:40 +0000", "msg_from": "Junfeng Yang <yjerome@vmware.com>", "msg_from_op": true, "msg_subject": "Partitioned table permission question" }, { "msg_contents": "On Tue, Apr 20, 2021 at 9:00 PM Junfeng Yang <yjerome@vmware.com> wrote:\n> Hi hackers,\n>\n> As I played with the partitioned table with GRANT, I found two questions.\n> Let's see an example:\n>\n>\n> CREATE TABLE measurement (\n> city_id int not null,\n> logdate date not null,\n> peaktemp int,\n> unitsales int\n> ) PARTITION BY RANGE (logdate);\n>\n> CREATE TABLE measurement_y2006m02 PARTITION OF measurement\n> FOR VALUES FROM ('2006-02-01') TO ('2006-03-01');\n>\n> CREATE TABLE measurement_y2006m03 PARTITION OF measurement\n> FOR VALUES FROM ('2006-03-01') TO ('2006-04-01');\n>\n> CREATE USER a;\n> GRANT SELECT ON measurement TO a;\n> GRANT INSERT ON measurement TO a;\n>\n> I created a partitioned table with two leaf tables and only grant SELECT, INSERT on the root table to user a.\n>\n> The first question is:\n> As a user a, since I don't have permission to read the leaf tables, but select from the root will return the leafs data successfully.\n>\n> postgres=# set role a;\n> postgres=> explain select * from measurement_y2006m02;\n> ERROR: permission denied for table measurement_y2006m02\n> postgres=> explain select * from measurement;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------\n> Append (cost=0.00..75.50 rows=3700 width=16)\n> -> Seq Scan on measurement_y2006m02 measurement_1 (cost=0.00..28.50 rows=1850 width=16)\n> -> Seq Scan on measurement_y2006m03 measurement_2 (cost=0.00..28.50 rows=1850 width=16)\n> (3 rows)\n>\n> From the plan, we do scan on the leaf tables without ACL check. And the reason is in expand_single_inheritance_child,\n> we always set childrte->requiredPerms = 0; Seems like we always think the child has the same permission with the partitioned table.\n>\n>\n> For the second question:\n> As a user a, I'm not allowed to insert any data into leaf tables.\n> But insert on the partitioned table will make the data go into leaves.\n>\n> postgres=> insert into measurement_y2006m02 values (1, '2006-02-01', 1, 1);\n> ERROR: permission denied for table measurement_y2006m02\n> postgres=> insert into measurement values (1, '2006-02-01', 1, 1);\n> INSERT 0 1\n>\n> It makes me feel strange, we can grant different permission for partition tables, but as long as the user\n> has permission on the partitioned table, it can still see/modify the leaf tables which don't have permission.\n> Can anyone help me understand the behavior?\n\nPermission model of partitioning is same as traditional table\ninheritance, about which we write the following in the documentation\n[1]:\n\n\"Inherited queries perform access permission checks on the parent\ntable only. Thus, for example, granting UPDATE permission on the\ncities table implies permission to update rows in the capitals table\nas well, when they are accessed through cities. This preserves the\nappearance that the data is (also) in the parent table. But the\ncapitals table could not be updated directly without an additional\ngrant. In a similar way, the parent table's row security policies (see\nSection 5.8) are applied to rows coming from child tables during an\ninherited query. A child table's policies, if any, are applied only\nwhen it is the table explicitly named in the query; and in that case,\nany policies attached to its parent(s) are ignored.\"\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/docs/current/ddl-inherit.html\n\n\n", "msg_date": "Tue, 20 Apr 2021 21:17:42 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Partitioned table permission question" }, { "msg_contents": "I see. Thanks for your explanation!", "msg_date": "Wed, 21 Apr 2021 00:19:11 +0000", "msg_from": "Junfeng Yang <yjerome@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Partitioned table permission question" } ]
[ { "msg_contents": "Hello,\n\n\nI think we've found a few existing problems with handling the parallel safety of functions while doing an experiment. Could I hear your opinions on what we should do? I'd be willing to create and submit a patch to fix them.\n\nThe experiment is to add a parallel safety check in FunctionCallInvoke() and run the regression test with force_parallel_mode=regress. The added check errors out with ereport(ERROR) when the about-to-be-called function is parallel unsafe and the process is currently in parallel mode. 6 test cases failed because the following parallel-unsafe functions were called:\n\n dsnowball_init\n balkifnull\n int44out\n text_w_default_out\n widget_out\n\nThe first function is created in src/backend/snowball/snowball_create.sql for full text search. The remaining functions are created during the regression test run.\n\nThe relevant issues follow.\n\n\n(1)\nAll the above functions are actually parallel safe looking at their implementations. It seems that their CREATE FUNCTION statements are just missing PARALLEL SAFE specifications, so I think I'll add them. dsnowball_lexize() may also be parallel safe.\n\n\n(2)\nI'm afraid the above phenomenon reveals that postgres overlooks parallel safety checks in some places. Specifically, we noticed the following:\n\n* User-defined aggregate\nCREATE AGGREGATE allows to specify parallel safety of the aggregate itself and the planner checks it, but the support function of the aggregate is not checked. OTOH, the document clearly says:\n\nhttps://www.postgresql.org/docs/devel/xaggr.html\n\n\"Worth noting also is that for an aggregate to be executed in parallel, the aggregate itself must be marked PARALLEL SAFE. The parallel-safety markings on its support functions are not consulted.\"\n\nhttps://www.postgresql.org/docs/devel/sql-createaggregate.html\n\n\"An aggregate will not be considered for parallelization if it is marked PARALLEL UNSAFE (which is the default!) or PARALLEL RESTRICTED. 
Note that the parallel-safety markings of the aggregate's support functions are not consulted by the planner, only the marking of the aggregate itself.\"\n\nCan we check the parallel safety of aggregate support functions during statement execution and error out? Is there any reason not to do so?\n\n* User-defined data type\nThe input, output, send,receive, and other functions of a UDT are not checked for parallel safety. Is there any good reason to not check them other than the concern about performance?\n\n* Functions for full text search\nShould CREATE TEXT SEARCH TEMPLATE ensure that the functions are parallel safe? (Those functions could be changed to parallel unsafe later with ALTER FUNCTION, though.)\n\n\n(3) Built-in UDFs are not checked for parallel safety\nThe functions defined in fmgr_builtins[], which are derived from pg_proc.dat, are not checked. Most of them are marked parallel safe, but some are paralel unsaferestricted.\n\nBesides, changing their parallel safety with ALTER FUNCTION PARALLEL does not affect the selection of query plan. This is because fmgr_builtins[] does not have a member for parallel safety.\n\nShould we add a member for parallel safety in fmgr_builtins[], and disallow ALTER FUNCTION to change the parallel safety of builtin UDFs?\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n\n", "msg_date": "Tue, 20 Apr 2021 08:52:46 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": true, "msg_subject": "[bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Tue, Apr 20, 2021 at 2:23 PM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n> (2)\n> I'm afraid the above phenomenon reveals that postgres overlooks parallel safety checks in some places. 
Specifically, we noticed the following:\n>\n> * User-defined aggregate\n> CREATE AGGREGATE allows to specify parallel safety of the aggregate itself and the planner checks it, but the support function of the aggregate is not checked. OTOH, the document clearly says:\n>\n> https://www.postgresql.org/docs/devel/xaggr.html\n>\n> \"Worth noting also is that for an aggregate to be executed in parallel, the aggregate itself must be marked PARALLEL SAFE. The parallel-safety markings on its support functions are not consulted.\"\n>\n> https://www.postgresql.org/docs/devel/sql-createaggregate.html\n>\n> \"An aggregate will not be considered for parallelization if it is marked PARALLEL UNSAFE (which is the default!) or PARALLEL RESTRICTED. Note that the parallel-safety markings of the aggregate's support functions are not consulted by the planner, only the marking of the aggregate itself.\"\n\nIMO, the reason for not checking the parallel safety of the support\nfunctions is that the functions themselves can have whole lot of other\nfunctions (can be nested as well) which might be quite hard to check\nat the planning time. That is why the job of marking an aggregate as\nparallel safe is best left to the user. They have to mark the aggreage\nparallel unsafe if at least one support function is parallel unsafe,\notherwise parallel safe.\n\n> Can we check the parallel safety of aggregate support functions during statement execution and error out? Is there any reason not to do so?\n\nAnd if we were to do above, within the function execution API, we need\nto know where the function got called from(?). 
It is best left to the\nuser to decide whether a function/aggregate is parallel safe or not.\nThis is the main reason we have declarative constructs like parallel\nsafe/unsafe/restricted.\n\nFor core functions, we definitely should properly mark parallel\nsafe/restricted/unsafe tags wherever possible.\n\nPlease correct me If I miss something.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 20 Apr 2021 15:06:30 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> On Tue, Apr 20, 2021 at 2:23 PM tsunakawa.takay@fujitsu.com\n> <tsunakawa.takay@fujitsu.com> wrote:\n>> https://www.postgresql.org/docs/devel/xaggr.html\n>> \n>> \"Worth noting also is that for an aggregate to be executed in parallel, the aggregate itself must be marked PARALLEL SAFE. The parallel-safety markings on its support functions are not consulted.\"\n\n> IMO, the reason for not checking the parallel safety of the support\n> functions is that the functions themselves can have whole lot of other\n> functions (can be nested as well) which might be quite hard to check\n> at the planning time. That is why the job of marking an aggregate as\n> parallel safe is best left to the user.\n\nYes. I think the documentation is perfectly clear that this is\nintentional; I don't see a need to change it.\n\n>> Should we add a member for parallel safety in fmgr_builtins[], and disallow ALTER FUNCTION to change the parallel safety of builtin UDFs?\n\nNo. 
You'd have to be superuser anyway to do that, and we're not in the\nhabit of trying to put training wheels on superusers.\n\nDon't have an opinion about the other points yet.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 20 Apr 2021 10:49:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "From: Tom Lane <tgl@sss.pgh.pa.us>\n> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> > IMO, the reason for not checking the parallel safety of the support\n> > functions is that the functions themselves can have whole lot of other\n> > functions (can be nested as well) which might be quite hard to check\n> > at the planning time. That is why the job of marking an aggregate as\n> > parallel safe is best left to the user.\n> \n> Yes. I think the documentation is perfectly clear that this is\n> intentional; I don't see a need to change it.\n\nOK, that's what I expected. I understood from this that the Postgres's stance toward parallel safety is that Postgres does its best effort to check parallel safety (as far as it doesn't hurt performance much, and perhaps the core code doesn't get very complex), and the user should be responsible for the actual parallel safety of ancillary objects (in this case, support functions for an aggregate) of the target object that he/she marked as parallel safe.\n\n\n> >> Should we add a member for parallel safety in fmgr_builtins[], and disallow\n> ALTER FUNCTION to change the parallel safety of builtin UDFs?\n> \n> No. You'd have to be superuser anyway to do that, and we're not in the\n> habit of trying to put training wheels on superusers.\n\nUnderstood. However, we may add the parallel safety member in fmgr_builtins[] in another thread for parallel INSERT SELECT. 
I'd appreciate your comment on this if you see any concern.\n\n\n> Don't have an opinion about the other points yet.\n\nI'd like to have your comments on them, too. But I understand you must be so busy at least until the beta release of PG 14.\n\n\nRegards\nTakayuki Tsunakawa\n\t\n\n\n\n", "msg_date": "Wed, 21 Apr 2021 01:56:11 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> writes:\n> From: Tom Lane <tgl@sss.pgh.pa.us>\n>> No. You'd have to be superuser anyway to do that, and we're not in the\n>> habit of trying to put training wheels on superusers.\n\n> Understood. However, we may add the parallel safety member in fmgr_builtins[] in another thread for parallel INSERT SELECT. I'd appreciate your comment on this if you see any concern.\n\n[ raised eyebrow... ] I find it very hard to understand why that would\nbe necessary, or even a good idea. Not least because there's no spare\nroom there; you'd have to incur a substantial enlargement of the\narray to add another flag. But also, that would indeed lock down\nthe value of the parallel-safety flag, and that seems like a fairly\nbad idea.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 20 Apr 2021 22:22:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "From: Tom Lane <tgl@sss.pgh.pa.us>\n> [ raised eyebrow... ] I find it very hard to understand why that would\n> be necessary, or even a good idea. Not least because there's no spare\n> room there; you'd have to incur a substantial enlargement of the\n> array to add another flag. 
But also, that would indeed lock down\n> the value of the parallel-safety flag, and that seems like a fairly\n> bad idea.\n\nYou're right, FmgrBuiltins is already fully packed (24 bytes on 64-bit machines). Enlarging the frequently accessed fmgr_builtins array may wreak unexpectedly large adverse effect on performance.\n\nI wanted to check the parallel safety of functions, which various objects (data type, index, trigger, etc.) come down to, in FunctionCallInvoke() and other few places. But maybe we skip the check for built-in functions. That's a matter of where we draw a line between where we check and where we don't.\n\n\nRegards\nTakayuki Tsunakawa\n\t\n\n\n\n", "msg_date": "Wed, 21 Apr 2021 02:41:58 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "> I think we've found a few existing problems with handling the parallel safety of\n> functions while doing an experiment. Could I hear your opinions on what we\n> should do? I'd be willing to create and submit a patch to fix them.\n> \n> The experiment is to add a parallel safety check in FunctionCallInvoke() and run\n> the regression test with force_parallel_mode=regress. The added check\n> errors out with ereport(ERROR) when the about-to-be-called function is\n> parallel unsafe and the process is currently in parallel mode. 6 test cases failed\n> because the following parallel-unsafe functions were called:\n> \n> dsnowball_init\n> balkifnull\n> int44out\n> text_w_default_out\n> widget_out\n> \n> The first function is created in src/backend/snowball/snowball_create.sql for\n> full text search. The remaining functions are created during the regression\n> test run.\n> \n> (1)\n> All the above functions are actually parallel safe looking at their\n> implementations. 
It seems that their CREATE FUNCTION statements are just\n> missing PARALLEL SAFE specifications, so I think I'll add them.\n> dsnowball_lexize() may also be parallel safe.\n\nI agree that it's better to mark the function with correct parallel safety lable.\nEspecially for the above functions which will be executed in parallel mode.\nIt will be friendly to developer and user who is working on something related to parallel test.\n\nSo, I attached the patch to mark the above functions parallel safe.\n\nBest regards,\nhouzj", "msg_date": "Wed, 21 Apr 2021 08:09:25 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Wed, Apr 21, 2021 at 8:12 AM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Tom Lane <tgl@sss.pgh.pa.us>\n> > [ raised eyebrow... ] I find it very hard to understand why that would\n> > be necessary, or even a good idea. Not least because there's no spare\n> > room there; you'd have to incur a substantial enlargement of the\n> > array to add another flag. But also, that would indeed lock down\n> > the value of the parallel-safety flag, and that seems like a fairly\n> > bad idea.\n>\n> You're right, FmgrBuiltins is already fully packed (24 bytes on 64-bit machines). Enlarging the frequently accessed fmgr_builtins array may wreak unexpectedly large adverse effect on performance.\n>\n> I wanted to check the parallel safety of functions, which various objects (data type, index, trigger, etc.) come down to, in FunctionCallInvoke() and other few places. But maybe we skip the check for built-in functions. 
That's a matter of where we draw a line between where we check and where we don't.\n>\n\nIIUC, the idea here is to check for parallel safety of functions at\nsomeplace in the code during function invocation so that if we execute\nany parallel unsafe/restricted function via parallel worker then we\nerror out. If so, isn't it possible to deal with built-in and\nnon-built-in functions in the same way?\n\nI think we want to have some safety checks for functions as we have\nfor transaction id in AssignTransactionId(), command id in\nCommandCounterIncrement(), for write operations in\nheap_prepare_insert(), etc. Is that correct?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 21 Apr 2021 15:39:07 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Wed, Apr 21, 2021 at 8:12 AM tsunakawa.takay@fujitsu.com\n> <tsunakawa.takay@fujitsu.com> wrote:\n>> From: Tom Lane <tgl@sss.pgh.pa.us>\n>>> [ raised eyebrow... ] I find it very hard to understand why that would\n>>> be necessary, or even a good idea.\n\n> IIUC, the idea here is to check for parallel safety of functions at\n> someplace in the code during function invocation so that if we execute\n> any parallel unsafe/restricted function via parallel worker then we\n> error out. 
If so, isn't it possible to deal with built-in and\n> non-built-in functions in the same way?\n\nYeah, one of the reasons I doubt this is a great idea is that you'd\nstill have to fetch the pg_proc row for non-built-in functions.\n\nThe obvious place to install such a check is fmgr_info(), which is\nfetching said row anyway for other purposes, so it's really hard to\nsee how adding anything to FmgrBuiltin is going to help.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 21 Apr 2021 09:34:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "From: Tom Lane <tgl@sss.pgh.pa.us>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > IIUC, the idea here is to check for parallel safety of functions at\n> > someplace in the code during function invocation so that if we execute\n> > any parallel unsafe/restricted function via parallel worker then we\n> > error out. If so, isn't it possible to deal with built-in and\n> > non-built-in functions in the same way?\n> \n> Yeah, one of the reasons I doubt this is a great idea is that you'd\n> still have to fetch the pg_proc row for non-built-in functions.\n> \n> The obvious place to install such a check is fmgr_info(), which is\n> fetching said row anyway for other purposes, so it's really hard to\n> see how adding anything to FmgrBuiltin is going to help.\n\nThank you, fmgr_info() looks like the best place to do the parallel safety check. Having a quick look at its callers, I didn't find any concerning place (of course, we can't be relieved until the regression test succeeds.) Also, with fmgr_info(), we don't have to find other places to add the check to deal with functions calls in execExpr.c and execExprInterp.c. This is beautiful.\n\nBut the current fmgr_info() does not check the parallel safety of builtin functions. It does not have information to do that. There are two options. 
Which do you think is better? I think 2.\n\n1) fmgr_info() reads pg_proc like for non-builtin functions\nThis ruins the effort for the fast path for builtin functions. I can't imagine how large the adverse impact on performance would be, but I'm worried.\n\nThe benefit is that ALTER FUNCTION on builtin functions takes effect. But such operations are nonsensical, so I don't think we want to gain such a benefit.\n\n\n2) Gen_fmgrtab.pl adds a member for proparallel in FmgrBuiltin\nBut we don't want to enlarge FmgrBuiltin struct. So, change the existing bool members strict and and retset into one member of type char, and represent the original values with some bit flags. Then we use that member for proparallel as well. (As a result, one byte is left for future use.)\n\n\nI think we'll try 2). I'd be grateful if you could point out anything I need to be careful to.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n\n", "msg_date": "Thu, 22 Apr 2021 06:40:19 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "From: Hou, Zhijie/侯 志杰 <houzj.fnst@fujitsu.com>\n> I agree that it's better to mark the function with correct parallel safety lable.\n> Especially for the above functions which will be executed in parallel mode.\n> It will be friendly to developer and user who is working on something related to\n> parallel test.\n> \n> So, I attached the patch to mark the above functions parallel safe.\n\nThank you, the patch looks good. Please register it with the next CF if not yet.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n\n", "msg_date": "Thu, 22 Apr 2021 07:27:28 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [bug?] 
Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "\n> Thank you, fmgr_info() looks like the best place to do the parallel safety check.\n> Having a quick look at its callers, I didn't find any concerning place (of course,\n> we can't be relieved until the regression test succeeds.) Also, with fmgr_info(),\n> we don't have to find other places to add the check to deal with function calls\n> in execExpr.c and execExprInterp.c. This is beautiful.\n> \n> But the current fmgr_info() does not check the parallel safety of builtin\n> functions. It does not have information to do that. There are two options.\n> Which do you think is better? I think 2.\n> \n> 1) fmgr_info() reads pg_proc like for non-builtin functions This ruins the effort\n> for the fast path for builtin functions. I can't imagine how large the adverse\n> impact on performance would be, but I'm worried.\n\nFor approach 1): I think it could result in infinite recursion.\n\nFor example:\nIf we first access a built-in function A which has not been cached, \nit needs to access pg_proc. When accessing pg_proc, it internally still needs some built-in function B to scan.\nAt this time, if B is not cached, it still needs to fetch function B's parallel flag by accessing pg_proc.proparallel.\nThen it could result in infinite recursion. \n\nSo, I think we can consider the approach 2)\n\nBest regards,\nhouzj\n\n\n", "msg_date": "Thu, 22 Apr 2021 09:08:49 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [bug?] 
Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "From: Hou, Zhijie/侯 志杰 <houzj.fnst@fujitsu.com>\n> For approach 1): I think it could result in infinite recursion.\n> \n> For example:\n> If we first access a built-in function A which has not been cached,\n> it needs to access pg_proc. When accessing pg_proc, it internally still needs\n> some built-in function B to scan.\n> At this time, if B is not cached, it still needs to fetch function B's parallel flag by\n> accessing pg_proc.proparallel.\n> Then it could result in infinite recursion.\n> \n> So, I think we can consider the approach 2)\n\nHmm, that makes sense. That's a problem structure similar to that of relcache. Only one choice is left already, unless there's another better idea.\n\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n\n", "msg_date": "Fri, 23 Apr 2021 01:39:08 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Wed, Apr 21, 2021 at 12:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> writes:\n> > From: Tom Lane <tgl@sss.pgh.pa.us>\n> >> No. You'd have to be superuser anyway to do that, and we're not in the\n> >> habit of trying to put training wheels on superusers.\n>\n> > Understood. However, we may add the parallel safety member in fmgr_builtins[] in another thread for parallel INSERT SELECT. I'd appreciate your comment on this if you see any concern.\n>\n> [ raised eyebrow... ] I find it very hard to understand why that would\n> be necessary, or even a good idea. Not least because there's no spare\n> room there; you'd have to incur a substantial enlargement of the\n> array to add another flag. But also, that would indeed lock down\n> the value of the parallel-safety flag, and that seems like a fairly\n> bad idea.\n>\n\nI'm curious. 
The FmgrBuiltin struct includes the \"strict\" flag, so\nthat would \"lock down the value\" of the strict flag, wouldn't it?\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Fri, 23 Apr 2021 17:37:20 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Wed, Apr 21, 2021 at 7:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > On Wed, Apr 21, 2021 at 8:12 AM tsunakawa.takay@fujitsu.com\n> > <tsunakawa.takay@fujitsu.com> wrote:\n> >> From: Tom Lane <tgl@sss.pgh.pa.us>\n> >>> [ raised eyebrow... ] I find it very hard to understand why that would\n> >>> be necessary, or even a good idea.\n>\n> > IIUC, the idea here is to check for parallel safety of functions at\n> > someplace in the code during function invocation so that if we execute\n> > any parallel unsafe/restricted function via parallel worker then we\n> > error out. If so, isn't it possible to deal with built-in and\n> > non-built-in functions in the same way?\n>\n> Yeah, one of the reasons I doubt this is a great idea is that you'd\n> still have to fetch the pg_proc row for non-built-in functions.\n>\n\nSo, are you suggesting that we should fetch the pg_proc row for\nbuilt-in functions as well for this purpose? If not, then how to\nidentify parallel safety of built-in functions in fmgr_info()?\n\nAnother idea could be that we check parallel safety of built-in\nfunctions based on some static information. As we know the func_ids of\nnon-parallel-safe built-in functions, we can have a function\nfmgr_builtin_parallel_safe() which checks if the func_id is not one\namong the predefined func_ids of non-parallel-safe built-in functions;\nif so, it returns true, otherwise, false. 
Then, we can call this new function\nin fmgr_info for built-in functions.\n\nThoughts?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 23 Apr 2021 15:32:07 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "Greg Nancarrow <gregn4422@gmail.com> writes:\n> I'm curious. The FmgrBuiltin struct includes the \"strict\" flag, so\n> that would \"lock down the value\" of the strict flag, wouldn't it?\n\nIt does, but that's much more directly a property of the function's\nC code than parallel-safety is.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 23 Apr 2021 09:15:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Fri, Apr 23, 2021 at 9:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Greg Nancarrow <gregn4422@gmail.com> writes:\n> > I'm curious. 
The FmgrBuiltin struct includes the \"strict\" flag, so\n> > that would \"lock down the value\" of the strict flag, wouldn't it?\n>\n> It does, but that's much more directly a property of the function's\n> C code than parallel-safety is.\n\nI'm not sure I agree with that, but I think having the \"strict\" flag\nin FmgrBuiltin isn't that nice either.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 23 Apr 2021 11:50:17 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Apr 23, 2021 at 9:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Greg Nancarrow <gregn4422@gmail.com> writes:\n>>> I'm curious. 
So, isn't it better to disallow changing parallel\nsafety for built-in functions?\n\nAlso, if the strict property of built-in functions is fixed\ninternally, why we allow users to change it and is that of any help?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 24 Apr 2021 08:23:25 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Sat, Apr 24, 2021 at 12:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Apr 23, 2021 at 6:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Greg Nancarrow <gregn4422@gmail.com> writes:\n> > > I'm curious. The FmgrBuiltin struct includes the \"strict\" flag, so\n> > > that would \"lock down the value\" of the strict flag, wouldn't it?\n> >\n> > It does, but that's much more directly a property of the function's\n> > C code than parallel-safety is.\n> >\n>\n> Isn't parallel safety also the C code property? I mean unless someone\n> changes the built-in function code, changing that property would be\n> dangerous. The other thing is even if a user is allowed to change one\n> function's property, how will they know which other functions are\n> called by that function and whether they are parallel-safe or not. For\n> example, say if the user wants to change the parallel safe property of\n> a built-in function brin_summarize_new_values, unless she changes its\n> code and the functions called by it like brin_summarize_range, it\n> would be dangerous. 
So, isn't it better to disallow changing parallel\nsafety for built-in functions?\n\nAlso, if the strict property of built-in functions is fixed\ninternally, why do we allow users to change it, and is that of any help?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 24 Apr 2021 08:23:25 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Sat, Apr 24, 2021 at 12:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Apr 23, 2021 at 6:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Greg Nancarrow <gregn4422@gmail.com> writes:\n> > > I'm curious. The FmgrBuiltin struct includes the \"strict\" flag, so\n> > > that would \"lock down the value\" of the strict flag, wouldn't it?\n> >\n> > It does, but that's much more directly a property of the function's\n> > C code than parallel-safety is.\n> >\n>\n> Isn't parallel safety also the C code property? I mean unless someone\n> changes the built-in function code, changing that property would be\n> dangerous. The other thing is even if a user is allowed to change one\n> function's property, how will they know which other functions are\n> called by that function and whether they are parallel-safe or not. For\n> example, say if the user wants to change the parallel safe property of\n> a built-in function brin_summarize_new_values, unless she changes its\n> code and the functions called by it like brin_summarize_range, it\n> would be dangerous. 
To avoid recursive safety check when fetching\nproparallel from pg_proc, we can add a Global variable to mark is it in a recursive state.\nAnd we skip safety check in a recursive state, In this approach, parallel safety\nwill not be locked, and there are no new members in FmgrBuiltin.\n\nAttaching the patch about this approach [0001-approach-1].\nThoughts ?\n\nI also attached another approach patch [0001-approach-2] about adding\nparallel safety in FmgrBuiltin, because this approach seems faster and\nwe can combine some bool member into a bitflag to avoid enlarging the\nFmgrBuiltin array, though this approach will lock down the parallel safety\nof built-in function.\n\nBest regards,\nhouzj", "msg_date": "Thu, 29 Apr 2021 01:42:18 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Fri, Apr 23, 2021 at 10:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> Isn't parallel safety also the C code property?\n\nIn my opinion, yes.\n\n> So, isn't it better to disallow changing parallel\n> safety for built-in functions?\n\nSuperusers can do a lot of DML operations on the system catalogs that\nare manifestly unsafe. I think we should really consider locking that\ndown somehow, but I doubt it makes sense to treat this case separately\nfrom all the others. What do you think will happen if you change\nproargtypes?\n\n> Also, if the strict property of built-in functions is fixed\n> internally, why we allow users to change it and is that of any help?\n\nOne real application of allowing these sorts of changes is letting\nusers correct things that were done wrong originally without waiting\nfor a new major release.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 4 May 2021 15:09:05 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] 
Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Wed, Apr 28, 2021 at 9:42 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n> So, If we do not want to lock down the parallel safety of built-in functions.\n> It seems we can try to fetch the proparallel from pg_proc for built-in function\n> in fmgr_info_cxt_security too. To avoid recursive safety check when fetching\n> proparallel from pg_proc, we can add a Global variable to mark is it in a recursive state.\n> And we skip safety check in a recursive state, In this approach, parallel safety\n> will not be locked, and there are no new members in FmgrBuiltin.\n>\n> Attaching the patch about this approach [0001-approach-1].\n> Thoughts ?\n\nThis seems to be full of complicated if-tests that don't seem\nnecessary and aren't explained by the comments. Also, introducing a\nsystem cache lookup here seems completely unacceptable from a\nreliability point of view, and I bet it's not too good for\nperformance, either.\n\n> I also attached another approach patch [0001-approach-2] about adding\n> parallel safety in FmgrBuiltin, because this approach seems faster and\n> we can combine some bool member into a bitflag to avoid enlarging the\n> FmgrBuiltin array, though this approach will lock down the parallel safety\n> of built-in function.\n\nThis doesn't seem like a good idea either.\n\nI really don't understand what problem any of this is intended to\nsolve. Bharath's analysis above seems right on point to me. I think if\nanybody is writing a patch that requires that this be changed in this\nway, that person is probably doing something wrong.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 4 May 2021 15:18:36 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] 
Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Wed, Apr 28, 2021 at 9:42 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n> So, if we do not want to lock down the parallel safety of built-in functions,\n> it seems we can try to fetch the proparallel from pg_proc for built-in functions\n> in fmgr_info_cxt_security too. To avoid a recursive safety check when fetching\n> proparallel from pg_proc, we can add a global variable to mark whether it is in a recursive state,\n> and we skip the safety check in a recursive state. In this approach, parallel safety\n> will not be locked, and there are no new members in FmgrBuiltin.\n>\n> Attaching the patch about this approach [0001-approach-1].\n> Thoughts?\n\nThis seems to be full of complicated if-tests that don't seem\nnecessary and aren't explained by the comments. Also, introducing a\nsystem cache lookup here seems completely unacceptable from a\nreliability point of view, and I bet it's not too good for\nperformance, either.\n\n> I also attached another approach patch [0001-approach-2] about adding\n> parallel safety in FmgrBuiltin, because this approach seems faster and\n> we can combine some bool members into a bit flag to avoid enlarging the\n> FmgrBuiltin array, though this approach will lock down the parallel safety\n> of built-in functions.\n\nThis doesn't seem like a good idea either.\n\nI really don't understand what problem any of this is intended to\nsolve. Bharath's analysis above seems right on point to me. I think if\nanybody is writing a patch that requires that this be changed in this\nway, that person is probably doing something wrong.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 4 May 2021 15:18:36 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] 
Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Wed, May 5, 2021 at 5:09 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Apr 23, 2021 at 10:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Isn't parallel safety also the C code property?\n>\n> > Also, if the strict property of built-in functions is fixed\n> > internally, why do we allow users to change it, and is that of any help?\n>\n> One real application of allowing these sorts of changes is letting\n> users correct things that were done wrong originally without waiting\n> for a new major release.\n>\n\nProblem is, for built-in functions, the changes are allowed, but for\nsome properties (like strict) the allowed changes don't actually take\neffect (this is what Amit was referring to - so why allow those\nchanges?).\nIt's because some of the function properties are cached in\nFmgrBuiltins[] (for a \"fast-path\" lookup for built-ins), according to\ntheir state at build time (from pg_proc.dat), but ALTER FUNCTION is\njust changing it in the system catalogs. Also, with sufficient\nprivileges, a built-in function can be redefined, yet the original\nfunction (whose info is cached in FmgrBuiltins[]) is always invoked,\nnot the newly-defined version.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Wed, 5 May 2021 13:47:25 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] 
Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Tue, May 4, 2021 at 11:47 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> Problem is, for built-in functions, the changes are allowed, but for\n> some properties (like strict) the allowed changes don't actually take\n> effect (this is what Amit was referring to - so why allow those\n> changes?).\n> It's because some of the function properties are cached in\n> FmgrBuiltins[] (for a \"fast-path\" lookup for built-ins), according to\n> their state at build time (from pg_proc.dat), but ALTER FUNCTION is\n> just changing it in the system catalogs. Also, with sufficient\n> privileges, a built-in function can be redefined, yet the original\n> function (whose info is cached in FmgrBuiltins[]) is always invoked,\n> not the newly-defined version.\n\nI agree. I think that's not ideal. I think we should consider putting\nsome more restrictions on updating system catalog changes, and I also\nthink that if we can get out of having strict need to be part of\nFmgrBuiltins[] that would be good. But what I don't agree with is the\nidea that since strict already has this problem, it's OK to do the\nsame thing with parallel-safety. That seems to me to be making a bad\nsituation worse, and I can't see what problem it actually solves.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 May 2021 10:09:32 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] 
Postgres is using it as in the system catalog relcache below.\r\n\r\n[relcache.c]\r\n/*\r\n * hardcoded tuple descriptors, contents generated by genbki.pl\r\n */\r\nstatic const FormData_pg_attribute Desc_pg_class[Natts_pg_class] = {Schema_pg_class};\r\nstatic const FormData_pg_attribute Desc_pg_attribute[Natts_pg_attribute] = {Schema_pg_attribute};\r\n...\r\n\r\n\r\n(2) Should it be disallowed for users to change system function properties with ALTER FUNCTION?\r\nMaybe yes, but it's not an important issue for achieving parallel INSERT SELECT at the moment. So, I think this can be discussed in an independent separate thread.\r\n\r\nAs a reminder, Postgres have safeguards against modifying system objects as follows.\r\n\r\ntest=# drop table^C\r\ntest=# drop function pg_wal_replay_pause();\r\nERROR: cannot drop function pg_wal_replay_pause() because it is required by the database system\r\ntest=# drop table pg_largeobject;\r\nERROR: permission denied: \"pg_largeobject\" is a system catalog\r\n\r\nOTOH, Postgres doesn't disallow changing the system table column values directly, such as UPDATE pg_proc SET .... But it's warned in the manual that such operations are dangerous. So, we don't have to care about it.\r\n\r\nChapter 52. System Catalogs\r\nhttps://www.postgresql.org/docs/devel/catalogs.html\r\n\r\n\"You can drop and recreate the tables, add columns, insert and update values, and severely mess up your system that way. Normally, one should not change the system catalogs by hand, there are normally SQL commands to do that. (For example, CREATE DATABASE inserts a row into the pg_database catalog — and actually creates the database on disk.) 
There are some exceptions for particularly esoteric operations, but many of those have been made available as SQL commands over time, and so the need for direct manipulation of the system catalogs is ever decreasing.\"\r\n\r\n\r\n(3) Why do we want to have parallel-safety in fmgr_builtins[]?\r\nAs proposed in this thread and/or \"Parallel INSERT SELECT take 2\", we thought of detecting parallel unsafe function execution during SQL statement execution, instead of imposing much overhead to check parallel safety during query planning. Specifically, we add parallel safety check in fmgr_info() and/or FunctionCallInvoke().\r\n\r\n(Alternatively, I think we can conclude that we assume parallel unsafe built-in functions won't be used in parallel DML. In that case, we don't change FmgrBuiltin and we just skip the parallel safety check for built-in functions when the function is called. Would you buy this?)\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n", "msg_date": "Thu, 6 May 2021 02:54:10 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Wed, May 5, 2021 at 7:39 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, May 4, 2021 at 11:47 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> > Problem is, for built-in functions, the changes are allowed, but for\n> > some properties (like strict) the allowed changes don't actually take\n> > effect (this is what Amit was referring to - so why allow those\n> > changes?).\n> > It's because some of the function properties are cached in\n> > FmgrBuiltins[] (for a \"fast-path\" lookup for built-ins), according to\n> > their state at build time (from pg_proc.dat), but ALTER FUNCTION is\n> > just changing it in the system catalogs. 
Also, with sufficient\n> > privileges, a built-in function can be redefined, yet the original\n> > function (whose info is cached in FmgrBuiltins[]) is always invoked,\n> > not the newly-defined version.\n>\n> I agree. I think that's not ideal. I think we should consider putting\n> some more restrictions on updating system catalog changes, and I also\n> think that if we can get out of having strict need to be part of\n> FmgrBuiltins[] that would be good. But what I don't agree with is the\n> idea that since strict already has this problem, it's OK to do the\n> same thing with parallel-safety. That seems to me to be making a bad\n> situation worse, and I can't see what problem it actually solves.\n>\n\nThe idea here is to check for parallel safety of functions at\nsomeplace in the code during function invocation so that if we execute\nany parallel unsafe/restricted function via parallel worker then we\nerror out. I think that is a good safety net especially if we can do\nit with some simple check. Now, we already have pg_proc information in\nfmgr_info_cxt_security for non-built-in functions, so we can check\nthat and error out if the unsafe function is invoked in parallel mode.\nIt has been observed that we were calling some unsafe functions in\nparallel-mode in the regression tests which is caught by such a check.\n\nI think here the main challenge is to do a similar check for built-in\nfunctions and one of the ideas to do that was to extend FmgrBuiltins\nto cache that information. I see why that idea is not good and maybe\nwe can see if there is some other place where we already fetch pg_proc\nfor built-in functions and can we have such a check at that place? 
If\nthat is not feasible then we can probably have such a check just for\nnon-built-in functions as that seems straightforward.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 6 May 2021 12:30:20 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "From: Robert Haas <robertmhaas@gmail.com>\r\n> On Wed, Apr 28, 2021 at 9:42 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> > So, If we do not want to lock down the parallel safety of built-in functions.\r\n> > It seems we can try to fetch the proparallel from pg_proc for built-in function\r\n> > in fmgr_info_cxt_security too. To avoid recursive safety check when fetching\r\n> > proparallel from pg_proc, we can add a Global variable to mark is it in a\r\n> recursive state.\r\n> > And we skip safety check in a recursive state, In this approach, parallel safety\r\n> > will not be locked, and there are no new members in FmgrBuiltin.\r\n> >\r\n> > Attaching the patch about this approach [0001-approach-1].\r\n> > Thoughts ?\r\n> \r\n> This seems to be full of complicated if-tests that don't seem\r\n> necessary and aren't explained by the comments. Also, introducing a\r\n> system cache lookup here seems completely unacceptable from a\r\n> reliability point of view, and I bet it's not too good for\r\n> performance, either.\r\n\r\nAgreed. Also, PG_TRY() would be relatively heavyweight here. I'm inclined to avoid this approach.\r\n\r\n\r\n> > I also attached another approach patch [0001-approach-2] about adding\r\n> > parallel safety in FmgrBuiltin, because this approach seems faster and\r\n> > we can combine some bool member into a bitflag to avoid enlarging the\r\n> > FmgrBuiltin array, though this approach will lock down the parallel safety\r\n> > of built-in function.\r\n> \r\n> This doesn't seem like a good idea either.\r\n\r\nThis looks good to me. 
What makes you think so?\r\n\r\nThat said, I actually think we want to avoid even this change. That is, I'm wondering if we can skip the parallel safety of built-in functions.\r\n\r\nCan anyone think of the need to check the parallel safety of built-in functions in the context of parallel INSERT SELECT? The planner already checks (or can check) the parallel safety of the SELECT part with max_parallel_hazard(). Regarding the INSERT part, we're trying to rely on the parallel safety of the target table that the user specified with CREATE/ALTER TABLE. I don't see where we need to check the parallel safety of uilt-in functions.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Thu, 6 May 2021 07:26:29 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Wed, May 5, 2021 at 10:54 PM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n> (1) Is it better to get hardcoded function properties out of fmgr_builtins[]?\n> It's little worth doing so or thinking about that. It's no business for users to change system objects, in this case system functions.\n\nI don't entirely agree with this. Whether or not users have any\nbusiness changing system functions, it's better to have one source of\ntruth than two. Now that being said, this is not a super-important\nproblem for us to go solve, and hard-coding a certain amount of stuff\nis probably necessary to allow the system to bootstrap itself. So for\nme it's one of those things that is in.a grey area: if someone showed\nup with a patch to make it better, I'd be happy. 
But I probably\nwouldn't spend much time on writing such a patch unless it solved some\nother problem that I cared about.\n\n> (3) Why do we want to have parallel-safety in fmgr_builtins[]?\n> As proposed in this thread and/or \"Parallel INSERT SELECT take 2\", we thought of detecting parallel unsafe function execution during SQL statement execution, instead of imposing much overhead to check parallel safety during query planning. Specifically, we add parallel safety check in fmgr_info() and/or FunctionCallInvoke().\n\nI haven't read that thread, but I don't understand how that can work.\nThe reason we need to detect it at plan time is because we might need\nto use a different plan. At execution time it's too late for that.\n\nAlso, it seems potentially quite expensive. A query may be planned\nonce and executed many times. Also, a single query execution may call\nthe same SQL function many times. I think we don't want to incur the\noverhead of an extra syscache lookup every time anyone calls any\nfunction. A very simple expression like a+b+c+d+e involves four\nfunction calls, and + need not be a built-in, if the data type is\nuser-defined. And that might be happening for every row in a table\nwith millions of rows.\n\n> (Alternatively, I think we can conclude that we assume parallel unsafe built-in functions won't be used in parallel DML. In that case, we don't change FmgrBuiltin and we just skip the parallel safety check for built-in functions when the function is called. Would you buy this?)\n\nI don't really understand this idea. There's no such thing as parallel\nDML, is there? There's just DML, which we must to decide whether can\nbe done in parallel or not based on, among other things, the\nparallel-safety markings of the functions it contains. Maybe I am not\nunderstanding you correctly, but it seems like you're suggesting that\nin some cases we can just assume that the user hasn't done something\nparallel-unsafe without making any attempt to check it. 
I don't think\nI could support that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 6 May 2021 06:46:56 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Thu, May 6, 2021 at 3:00 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> The idea here is to check for parallel safety of functions at\n> someplace in the code during function invocation so that if we execute\n> any parallel unsafe/restricted function via parallel worker then we\n> error out. I think that is a good safety net especially if we can do\n> it with some simple check. Now, we already have pg_proc information in\n> fmgr_info_cxt_security for non-built-in functions, so we can check\n> that and error out if the unsafe function is invoked in parallel mode.\n> It has been observed that we were calling some unsafe functions in\n> parallel-mode in the regression tests which is caught by such a check.\n\nI see your point, but I am not convinced. As I said to Tsunakawa-san,\ndoing the check here seems expensive. Also, I had the idea in mind\nthat parallel-safety should work like volatility. We don't check at\nruntime whether a volatile function is being called in a context where\nvolatile functions are not supposed to be used. If for example you try\nto call a volatile function directly from an index expression I\nbelieve you will get an error. But if the index expression calls an\nimmutable function and then that function internally calls something\nvolatile, you don't get an error. Now it might not be a good idea: you\ncould end up with a broken index. But that's your fault for\nmislabeling the function you used.\n\nSometimes this is actually quite useful. You might know that, while\nthe function is in general volatile, it is immutable in the particular\nway that you are using it. 
Or, perhaps, you are using the volatile\nfunction incidentally and it doesn't affect the output of your\nfunction at all. Or, maybe you actually want to build an index that\nmight break, and then it's up to you to rebuild the index if and when\nthat is required. Users do this kind of thing all the time, I think,\nand would be unhappy if we started checking it more rigorously than we\ndo today.\n\nNow, I don't see why the same idea can't or shouldn't apply to\nparallel-safety. If you call a parallel-unsafe function in a parallel\ncontext, it's pretty likely that you are going to get an error, and so\nyou might not want to do it. If the function is written in C, it could\neven cause horrible things to happen so that you crash the whole\nbackend or something, but I tried to set things up so that for\nbuilt-in functions you'll just get an error. But on the other hand,\nmaybe the parallel-unsafe function you are calling is not\nparallel-unsafe in all cases. If you want to create a wrapper function\nthat is labelled parallel-safe and try to make that it only calls the\nparallel-unsafe function in the cases where there's no safety problem,\nthat's up to you!\n\nIt's possible that I had the wrong idea here, so maybe the question\ndeserves more thought, but I wanted to explain what my thought process\nwas.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 6 May 2021 07:05:46 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Thu, May 6, 2021 at 5:26 PM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> Can anyone think of the need to check the parallel safety of built-in functions in the context of parallel INSERT SELECT? The planner already checks (or can check) the parallel safety of the SELECT part with max_parallel_hazard(). 
Regarding the INSERT part, we're trying to rely on the parallel safety of the target table that the user specified with CREATE/ALTER TABLE. I don't see where we need to check the parallel safety of built-in functions.\n>\n\nYes, I certainly can think of a reason to do this.\nThe idea, for the approach being discussed, is to allow the user to\ndeclare parallel-safety on a table, but then to catch any possible\nviolations of this at runtime (as opposed to adding additional\nparallel-safety checks at planning time).\nSo for INSERT with parallel SELECT for example (which runs in\nparallel-mode), the execution of index expressions,\ncolumn-default expressions, check constraints etc. may end up invoking\nfunctions (built-in or otherwise) that are NOT parallel-safe - so we\ncould choose to error out in this case when these violations are\ndetected.\nAs far as I can see, this checking of function parallel-safety can be\ndone with little overhead to the current code - it already gets proc\ninformation from the system cache for non-built-in functions, and for\nbuilt-in functions it could store the parallel-safety status in\nFmgrBuiltin and simply get it from there (I don't think we should be\nallowing changes to built-in function properties - currently it is\nallowed, but it doesn't work properly).\nThe other option is to just blindly trust the parallel-safety\ndeclaration on tables and whatever happens at runtime happens.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Thu, 6 May 2021 21:53:32 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] 
Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Thu, May 6, 2021 at 4:35 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, May 6, 2021 at 3:00 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > The idea here is to check for parallel safety of functions at\n> > someplace in the code during function invocation so that if we execute\n> > any parallel unsafe/restricted function via parallel worker then we\n> > error out. I think that is a good safety net especially if we can do\n> > it with some simple check. Now, we already have pg_proc information in\n> > fmgr_info_cxt_security for non-built-in functions, so we can check\n> > that and error out if the unsafe function is invoked in parallel mode.\n> > It has been observed that we were calling some unsafe functions in\n> > parallel-mode in the regression tests which is caught by such a check.\n>\n> I see your point, but I am not convinced. As I said to Tsunakawa-san,\n> doing the check here seems expensive.\n>\n\nIf I read your email correctly then you are saying it is expensive\nbased on the idea that we need to perform extra syscache lookup but\nactually for non-built-in functions, we already have parallel-safety\ninformation so such a check should not incur a significant cost.\n\n> Also, I had the idea in mind\n> that parallel-safety should work like volatility. We don't check at\n> runtime whether a volatile function is being called in a context where\n> volatile functions are not supposed to be used. If for example you try\n> to call a volatile function directly from an index expression I\n> believe you will get an error. But if the index expression calls an\n> immutable function and then that function internally calls something\n> volatile, you don't get an error. Now it might not be a good idea: you\n> could end up with a broken index. But that's your fault for\n> mislabeling the function you used.\n>\n> Sometimes this is actually quite useful. 
You might know that, while\n> the function is in general volatile, it is immutable in the particular\n> way that you are using it. Or, perhaps, you are using the volatile\n> function incidentally and it doesn't affect the output of your\n> function at all. Or, maybe you actually want to build an index that\n> might break, and then it's up to you to rebuild the index if and when\n> that is required. Users do this kind of thing all the time, I think,\n> and would be unhappy if we started checking it more rigorously than we\n> do today.\n>\n> Now, I don't see why the same idea can't or shouldn't apply to\n> parallel-safety. If you call a parallel-unsafe function in a parallel\n> context, it's pretty likely that you are going to get an error, and so\n> you might not want to do it. If the function is written in C, it could\n> even cause horrible things to happen so that you crash the whole\n> backend or something, but I tried to set things up so that for\n> built-in functions you'll just get an error. But on the other hand,\n> maybe the parallel-unsafe function you are calling is not\n> parallel-unsafe in all cases. If you want to create a wrapper function\n> that is labelled parallel-safe and try to make that it only calls the\n> parallel-unsafe function in the cases where there's no safety problem,\n> that's up to you!\n>\n\nI think it is difficult to say for what purpose parallel-unsafe\nfunction got called in parallel context so if we give an error in\ncases where otherwise it could lead to a crash or caused other\nhorrible things, users will probably appreciate us. 
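To make the shape (and cost) of such an invocation-time check concrete, here is a self-contained mock — this is not actual PostgreSQL code; MockFmgrInfo, check_parallel_safety and the message text are all invented for illustration. It only shows that, once the proparallel flag has been copied into the per-function lookup data at fmgr_info() time, the per-call cost amounts to a single char comparison:\n\n```c\n#include <assert.h>\n#include <stdbool.h>\n#include <stdio.h>\n\n/* Hypothetical stand-ins for pg_proc.proparallel values. */\n#define PROPARALLEL_SAFE       's'\n#define PROPARALLEL_RESTRICTED 'r'\n#define PROPARALLEL_UNSAFE     'u'\n\n/*\n * Mock of the per-function lookup data that fmgr_info() would cache:\n * the parallel-safety flag is copied once from the (mock) catalog, so\n * the per-call check below needs no further cache lookup.\n */\ntypedef struct MockFmgrInfo\n{\n    const char *fn_name;\n    char        fn_parallel;    /* cached copy of proparallel */\n} MockFmgrInfo;\n\nstatic bool in_parallel_mode = false;\n\n/*\n * What the proposed check at function invocation might amount to:\n * reject (here: return false) a parallel-unsafe function invoked\n * while parallel mode is active; allow everything else.\n */\nstatic bool\ncheck_parallel_safety(const MockFmgrInfo *finfo)\n{\n    if (in_parallel_mode && finfo->fn_parallel == PROPARALLEL_UNSAFE)\n    {\n        fprintf(stderr,\n                "parallel-safety execution violation of function \\"%s\\" (%c)\\n",\n                finfo->fn_name, finfo->fn_parallel);\n        return false;\n    }\n    return true;\n}\n\nint\nmain(void)\n{\n    MockFmgrInfo safe_fn   = {"textout", PROPARALLEL_SAFE};\n    MockFmgrInfo unsafe_fn = {"nextval", PROPARALLEL_UNSAFE};\n\n    /* Outside parallel mode everything is allowed. */\n    assert(check_parallel_safety(&safe_fn));\n    assert(check_parallel_safety(&unsafe_fn));\n\n    /* In parallel mode, only the unsafe function is rejected. */\n    in_parallel_mode = true;\n    assert(check_parallel_safety(&safe_fn));\n    assert(!check_parallel_safety(&unsafe_fn));\n\n    puts("ok");\n    return 0;\n}\n```\n\n(The function names used above are only examples of safe/unsafe markings, not a claim about any real fmgr structure layout.)\n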
OTOH, if the\nparallel-safety labeling is wrong (parallel-safe function is marked\nparallel-unsafe) and we gave an error in such a case, the user can\nalways change the parallel-safety attribute by using Alter Function.\nNow, if adding such a check is costly or needs some major re-design\nthen probably it might not be worth whereas I don't think that is the\ncase for non-built-in function invocation.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 6 May 2021 19:00:33 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "From: Robert Haas <robertmhaas@gmail.com>\r\n> On Wed, May 5, 2021 at 10:54 PM tsunakawa.takay@fujitsu.com\r\n> <tsunakawa.takay@fujitsu.com> wrote:\r\n> > As proposed in this thread and/or \"Parallel INSERT SELECT take 2\", we\r\n> thought of detecting parallel unsafe function execution during SQL statement\r\n> execution, instead of imposing much overhead to check parallel safety during\r\n> query planning. Specifically, we add parallel safety check in fmgr_info()\r\n> and/or FunctionCallInvoke().\r\n> \r\n> I haven't read that thread, but I don't understand how that can work.\r\n> The reason we need to detect it at plan time is because we might need\r\n> to use a different plan. At execution time it's too late for that.\r\n\r\n(I forgot to say this in my previous email. Robert-san, thank you very much for taking time to look at this and giving feedback. It was sad that we had to revert our parallel INSERT SELECT for redesign at the very end of the last CF. We need advice and suggestions from knowledgeable and thoughtful people like Tom-san, Andres-san and you in early stages to not repeat the tragedy.)\r\n\r\nI'd really like you to have a look at the first mail in [1], and to get your feedback like \"this part should be like ... 
instead\" and \"this part would probably work, I think.\" Without feedback from leading developers, I'm somewhat at a loss if and how we can proceed with the proposed approach.\r\n\r\nTo put it shortly, we found that it can take non-negligible time for the planner to check the parallel safety of the target table of INSERT SELECT when it has many (hundreds or thousands of) partitions. The check also added much complicated code, too. So, we got inclined to take Tom-san's suggestion -- let the user specify the parallel safety of the target table with CREATE/ALTER TABLE and the planner just decides a query plan based on it. Caching the results of parallel safety checks in relcache or a new shared hash table didn't seem to work well to me, or it should be beyond my brain at least.\r\n\r\nWe may think that it's okay to just believe the user-specified parallel safety. But I thought we could step up and do our best to check the parallel safety during statement execution, if it's not very invasive in terms of performance and code complexity. The aforementioned idea is that if the parallel processes find the called functions parallel unsafe, they error out. All ancillary objects of the target table, data types, constraints, indexes, triggers, etc., come down to some UDF, so it should be enough to check the parallel safety when the UDF is called.\r\n\r\n\r\n> Also, it seems potentially quite expensive. A query may be planned\r\n> once and executed many times. Also, a single query execution may call\r\n> the same SQL function many times. I think we don't want to incur the\r\n> overhead of an extra syscache lookup every time anyone calls any\r\n> function. A very simple expression like a+b+c+d+e involves four\r\n> function calls, and + need not be a built-in, if the data type is\r\n> user-defined. 
And that might be happening for every row in a table\r\n> with millions of rows.\r\n\r\nWe (optimistically) expect that the overhead won't be serious, because the parallel safety information is already at hand in the FmgrInfo struct when the function is called. We don't have to look up the syscache every time the function is called.\r\n\r\nOf course, adding even a single if statement may lead to a disaster in a critical path, so we need to assess the performance. I'd also appreciate if you could suggest some good workload we should experiment in the thread above.\r\n\r\n\r\n\r\n[1]\r\nParallel INSERT SELECT take 2\r\nhttps://www.postgresql.org/message-id/TYAPR01MB29905A9AB82CC8BA50AB0F80FE709@TYAPR01MB2990.jpnprd01.prod.outlook.com\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Fri, 7 May 2021 06:21:59 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "> > Sometimes this is actually quite useful. You might know that, while\r\n> > the function is in general volatile, it is immutable in the particular\r\n> > way that you are using it. Or, perhaps, you are using the volatile\r\n> > function incidentally and it doesn't affect the output of your\r\n> > function at all. Or, maybe you actually want to build an index that\r\n> > might break, and then it's up to you to rebuild the index if and when\r\n> > that is required. Users do this kind of thing all the time, I think,\r\n> > and would be unhappy if we started checking it more rigorously than we\r\n> > do today.\r\n> >\r\n> > Now, I don't see why the same idea can't or shouldn't apply to\r\n> > parallel-safety. If you call a parallel-unsafe function in a parallel\r\n> > context, it's pretty likely that you are going to get an error, and so\r\n> > you might not want to do it. 
If the function is written in C, it could\r\n> > even cause horrible things to happen so that you crash the whole\r\n> > backend or something, but I tried to set things up so that for\r\n> > built-in functions you'll just get an error. But on the other hand,\r\n> > maybe the parallel-unsafe function you are calling is not\r\n> > parallel-unsafe in all cases. If you want to create a wrapper function\r\n> > that is labelled parallel-safe and try to make that it only calls the\r\n> > parallel-unsafe function in the cases where there's no safety problem,\r\n> > that's up to you!\r\n> >\r\n> \r\n> I think it is difficult to say for what purpose parallel-unsafe function got called in\r\n> parallel context so if we give an error in cases where otherwise it could lead to\r\n> a crash or caused other horrible things, users will probably appreciate us.\r\n> OTOH, if the parallel-safety labeling is wrong (parallel-safe function is marked\r\n> parallel-unsafe) and we gave an error in such a case, the user can always change\r\n> the parallel-safety attribute by using Alter Function.\r\n> Now, if adding such a check is costly or needs some major re-design then\r\n> probably it might not be worth whereas I don't think that is the case for\r\n> non-built-in function invocation.\r\n\r\nTemporarily, Just in case someone want to take a look at the patch for the safety check.\r\nI splited the patch into 0001(parallel safety check for user define function), 0003(parallel safety check for builtin function)\r\nand the fix for testcases.\r\n\r\nIMO, With such a check to give an error when detecting parallel unsafe function in parallel mode,\r\nit will be easier for users to discover potential threats(parallel unsafe function) in parallel mode.\r\n\r\nI think users is likely to invoke parallel unsafe function inner a parallel safe function unintentionally.\r\nSuch a check can help they detect the problem easier.\r\n\r\nAlthough, the strict check limits some usages(intentionally wrapper 
function), as Robert-san said.\r\nTo mitigate the effect of this limit, I was wondering whether we could do the safety check conditionally, such as only checking the top-level function invocation, and/or\r\nintroduce a GUC option to control whether the strict parallel safety check is performed. Thoughts?\r\n\r\nBest regards,\r\nhouzj", "msg_date": "Tue, 11 May 2021 06:58:17 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Tue, May 11, 2021 at 12:28 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> Temporarily, Just in case someone want to take a look at the patch for the safety check.\n>\n\nI am not sure there is yet a consensus on which cases exactly need\nto be dealt with. Let me try to summarize the discussion and see if\nthat helps. As per my understanding, the main reasons for this work\nare:\na. Ensure parallel-unsafe functions don't get executed in parallel\nmode. We do have checks to ensure that we don't select parallel-mode\nfor most cases where the parallel-unsafe function is used but we don't\nhave checks for input/output funcs, aggregate funcs, etc. This\nproposal is to detect such cases during function invocation and return\nan error. I think if, for some cases like aggregates or other types of\nfunctions, we allow selecting parallelism relying on the user, it is\nnot a bad idea to detect and return an error if some parallel-unsafe\nfunction is executed in parallel mode.\nb. Detect wrong parallel-safety markings. Say the user has declared\nsome function as parallel-safe but it invokes another parallel-unsafe\nfunction.\nc. The other motive is that this work can help us to enable\nparallelism for inserts (and maybe update/delete in the future). 
As\nbeing discussed in another thread [1], we are considering allowing\nparallel inserts on a table based on user input and then at runtime\ndetect if the insert is invoking any parallel-unsafe expression. The\nidea is that the user will be able to specify whether a write\noperation is allowed in parallel on a specified relation and we allow\nto select parallelism for such writes based on that and do the checks\nfor Select as we are doing now. There are other options like determine\nthe parallel-safety of writes in a planner and then only allow\nparallelism but those all seem costly. Now, I think it is not\ncompulsory to have such checks for this particular reason as we are\nrelying on user input but it will be good if we have it.\n\nI think the last purpose (c) is still debatable even though we\ncouldn't come up with anything better till now but even if leave that\naside for now, I think the other reasons are good enough to have some\nform of checks.\n\nNow, the proposal being discussed is to add a parallel-safety check in\nfmgr_info which seems to be invoked during all function executions. We\nneed to have access to proparallel attribute of the function to check\nthe parallel-safety and that is readily available in fmgr_info for\nnon-built-in functions because we already get the pg_proc information\nfrom sys cache. So, I guess there is no harm in checking it when the\ninformation is readily available. However, for built-in functions that\ninformation is not readily available as we get required function\ninformation from FmgrBuiltin (which doesn't have parallel-safety\ninformation). For built-in functions, the following options have been\ndiscussed:\na. Extend FmgrBuiltin without increasing its size to include parallel\ninformation.\nb. Enquire pg_proc cache to get the information. Accessing this for\neach invocation of builtin could be costly. We can probably incur this\ncost only when built-in is invoked in parallel-mode.\nc. 
Don't add check for builtins.\n\nI think if we can't think of any other better way to have checks for\nbuiltins and don't like any of (a) or (b) then there is no harm in\n(c). This will at least allow us to have parallel-safety check for\nuser-defined functions.\n\nThoughts?\n\n[1] - https://www.postgresql.org/message-id/TYAPR01MB29905A9AB82CC8BA50AB0F80FE709@TYAPR01MB2990.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 4 Jun 2021 15:47:10 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Fri, Jun 4, 2021 at 6:17 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> Thoughts?\n\nAs far as I can see, trying to error out at function call time if the\nfunction is parallel-safe doesn't fix any problem we have, and just\nmakes the design of this part of the system less consistent with what\nwe've done elsewhere. For example, if you create a stable function\nthat internally calls a volatile function, you don't get an error. You\ncan use your stable function in an index definition if you wish. That\nmay break, but if so, that's your problem. Also, when it breaks, it\nprobably won't blow up the entire world; you'll just have a messed-up\nindex. Currently, the parallel-safety stuff works the same way. If we\nnotice that something is marked parallel-unsafe, we'll skip\nparallelism. 
But you can lie to us and claim that things are safe when\nthey're not, and if you do, it may break, but that's your problem.\nMostly likely your query will just error out, and there will be no\nworse consequences than that, though if your parallel-unsafe function\nis written in C, it could do horrible things like crash, which is\nunavoidable because C code can do anything.\n\nNow, the reason for all of this work, as I understand it, is because\nwe want to enable parallel inserts, and the problem there is that a\nparallel insert could involve a lot of different things: it might need\nto compute expressions, or fire triggers, or check constraints, and\nany of those things could be parallel-unsafe. If we enable parallelism\nand then find out that we need to do to one of those things, we have a\nproblem. Something probably will error out. The thing is, with this\nproposal, that issue is not solved. Something will definitely error\nout. You'll probably get the error in a different place, but nobody\nfires off an INSERT hoping to get one error message rather than\nanother. What they want is for it to work. So I'm kind of confused how\nwe ended up going in this direction which seems to me at least to be a\ntangent from the real issue, and somewhat at odds with the way the\nrest of PostgreSQL is designed.\n\nIt seems to me that we could simply add a flag to each relation saying\nwhether or not we think that INSERT operations - or perhaps DML\noperations generally - are believed to be parallel-safe for that\nrelation. Like the marking on functions, it would be the user's\nresponsibility to get that marking correct. If they don't, they might\ncall a parallel-unsafe function in parallel mode, and that will\nprobably error out. But that's no worse than what we already have in\nexisting cases, so I don't see why it requires doing what's proposed\nhere first. 
Now, it does have the advantage of being not very\nconvenient for users, who, I'm sure, would prefer that the system\nfigure out for them automatically whether or not parallel inserts are\nlikely to be safe, rather than making them declare it, especially\nsince presumably the default declaration would have to be \"unsafe,\" as\nit is for functions. But I don't have a better idea right now.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 7 Jun 2021 09:58:53 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Mon, Jun 7, 2021 at 7:29 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Jun 4, 2021 at 6:17 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Thoughts?\n>\n> As far as I can see, trying to error out at function call time if the\n> function is parallel-safe doesn't fix any problem we have, and just\n> makes the design of this part of the system less consistent with what\n> we've done elsewhere. For example, if you create a stable function\n> that internally calls a volatile function, you don't get an error. You\n> can use your stable function in an index definition if you wish. That\n> may break, but if so, that's your problem. Also, when it breaks, it\n> probably won't blow up the entire world; you'll just have a messed-up\n> index. Currently, the parallel-safety stuff works the same way. If we\n> notice that something is marked parallel-unsafe, we'll skip\n> parallelism.\n>\n\nThis is not true in all cases which is one of the reasons for this\nthread. 
For example, we don't skip parallelism when I/O functions are\nparallel-unsafe as is shown in the following case:\n\npostgres=# CREATE FUNCTION text_w_default_in(cstring) RETURNS\ntext_w_default AS 'textin' LANGUAGE internal STRICT IMMUTABLE;\nNOTICE: type \"text_w_default\" is not yet defined\nDETAIL: Creating a shell type definition.\nCREATE FUNCTION\n\npostgres=# CREATE FUNCTION text_w_default_out(text_w_default)\nRETURNS cstring AS 'textout' LANGUAGE internal STRICT IMMUTABLE;\nNOTICE: argument type text_w_default is only a shell\nCREATE FUNCTION\npostgres=# CREATE TYPE text_w_default ( internallength = variable,\ninput = text_w_default_in, output = text_w_default_out, alignment\n= int4, default = 'zippo');\nCREATE TYPE\npostgres=# CREATE TABLE default_test (f1 text_w_default, f2 int);\nCREATE TABLE\npostgres=# INSERT INTO default_test DEFAULT VALUES;\nINSERT 0 1\npostgres=# SELECT * FROM default_test;\nERROR: parallel-safety execution violation of function \"text_w_default_out\" (u)\n\nNote the error is raised after applying the patch, without the patch,\nthe above won't show any error (error message could be improved here).\nSuch cases can lead to unpredictable behavior without a patch because\nwe won't be able to detect the execution of parallel-unsafe functions.\nThere are similar examples from regression tests. Now, one way to deal\nwith similar cases could be that we document them and say we don't\nconsider parallel-safety in such cases and the other way is to detect\nsuch cases and error out. 
Yet another way could be that we somehow try\nto check these cases as well before enabling parallelism but I thought\nthese cases fall in the similar category as aggregate's support\nfunctions.\n\n> But you can lie to us and claim that things are safe when\n> they're not, and if you do, it may break, but that's your problem.\n> Mostly likely your query will just error out, and there will be no\n> worse consequences than that, though if your parallel-unsafe function\n> is written in C, it could do horrible things like crash, which is\n> unavoidable because C code can do anything.\n>\n\nThat is true but I was worried for cases where users didn't lie to us\nbut we still allowed those to choose parallelism.\n\n> Now, the reason for all of this work, as I understand it, is because\n> we want to enable parallel inserts, and the problem there is that a\n> parallel insert could involve a lot of different things: it might need\n> to compute expressions, or fire triggers, or check constraints, and\n> any of those things could be parallel-unsafe. If we enable parallelism\n> and then find out that we need to do to one of those things, we have a\n> problem. Something probably will error out. The thing is, with this\n> proposal, that issue is not solved. Something will definitely error\n> out. You'll probably get the error in a different place, but nobody\n> fires off an INSERT hoping to get one error message rather than\n> another. What they want is for it to work. So I'm kind of confused how\n> we ended up going in this direction which seems to me at least to be a\n> tangent from the real issue, and somewhat at odds with the way the\n> rest of PostgreSQL is designed.\n>\n> It seems to me that we could simply add a flag to each relation saying\n> whether or not we think that INSERT operations - or perhaps DML\n> operations generally - are believed to be parallel-safe for that\n> relation.\n>\n\nThis is exactly the direction we are trying to pursue. 
The proposal\n[1] has semantics like:\nCREATE TABLE table_name (...) PARALLEL DML { UNSAFE | RESTRICTED | SAFE };\n ALTER TABLE table_name PARALLEL DML { UNSAFE | RESTRICTED | SAFE };\n\nThis property is recorded in pg_class's relparallel column as 'u',\n'r', or 's', just like pg_proc's proparallel. The default is UNSAFE.\nThis might require some bike-shedding to decide how exactly we want to\nexpose it to the user but I think it is on the lines of what you have\ndescribed here.\n\n> Like the marking on functions, it would be the user's\n> responsibility to get that marking correct. If they don't, they might\n> call a parallel-unsafe function in parallel mode, and that will\n> probably error out. But that's no worse than what we already have in\n> existing cases, so I don't see why it requires doing what's proposed\n> here first.\n>\n\nI agree it is not necessarily required if we give the responsibility\nto the user but this might give a better user experience, OTOH,\nwithout this as well, as you said it won't be any worse than current\nbehavior. But that was not the sole motivation of this proposal as\nexplained above in the email by giving example.\n\n> Now, it does have the advantage of being not very\n> convenient for users, who, I'm sure, would prefer that the system\n> figure out for them automatically whether or not parallel inserts are\n> likely to be safe, rather than making them declare it, especially\n> since presumably the default declaration would have to be \"unsafe,\" as\n> it is for functions.\n>\n\nTo improve the user experience in this regard, the proposal [1]\nprovides a function pg_get_parallel_safety(oid) using which users can\ndetermine whether it is safe to enable parallelism. 
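As a rough sketch of the idea behind such a pg_get_parallel_safety() helper — all names and the aggregation logic below are invented for illustration, not code from the proposal — the table-level answer can be thought of as folding the most restrictive proparallel marking over everything attached to the table (index expressions, column defaults, check constraints, trigger functions, and so on):\n\n```c\n#include <assert.h>\n#include <stddef.h>\n#include <stdio.h>\n\n#define PROPARALLEL_SAFE       's'\n#define PROPARALLEL_RESTRICTED 'r'\n#define PROPARALLEL_UNSAFE     'u'\n\n/* Rank the three proparallel values so that "more hazardous" wins. */\nstatic int\nhazard_rank(char proparallel)\n{\n    switch (proparallel)\n    {\n        case PROPARALLEL_SAFE:       return 0;\n        case PROPARALLEL_RESTRICTED: return 1;\n        default:                     return 2;   /* unsafe / unknown */\n    }\n}\n\n/*\n * Mock of the table-level question: given the proparallel markings of\n * all functions reachable from the table's ancillary objects, report\n * the most restrictive one found.\n */\nstatic char\ntable_parallel_hazard(const char *markings, size_t n)\n{\n    char        worst = PROPARALLEL_SAFE;\n\n    for (size_t i = 0; i < n; i++)\n        if (hazard_rank(markings[i]) > hazard_rank(worst))\n            worst = markings[i];\n    return worst;\n}\n\nint\nmain(void)\n{\n    /* All ancillary functions parallel-safe: DML could run in parallel. */\n    char all_safe[] = {'s', 's', 's'};\n    assert(table_parallel_hazard(all_safe, 3) == PROPARALLEL_SAFE);\n\n    /* One parallel-restricted default expression dominates the safe ones. */\n    char one_restricted[] = {'s', 'r', 's'};\n    assert(table_parallel_hazard(one_restricted, 3) == PROPARALLEL_RESTRICTED);\n\n    /* A single unsafe trigger function makes the whole table unsafe. */\n    char one_unsafe[] = {'s', 'r', 'u'};\n    assert(table_parallel_hazard(one_unsafe, 3) == PROPARALLEL_UNSAFE);\n\n    puts("ok");\n    return 0;\n}\n```\n\nThe point of the sketch is only the "worst marking wins" shape of the answer; how the real function enumerates the relevant objects is up to the proposal in [1].\n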
Surely, after the\nuser has checked with that function, one can add some unsafe\nconstraints to the table by altering the table but it will still be an\naid to enable parallelism on a relation.\n\n[1] - https://www.postgresql.org/message-id/TYAPR01MB29905A9AB82CC8BA50AB0F80FE709@TYAPR01MB2990.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 8 Jun 2021 09:03:45 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Mon, Jun 7, 2021 at 11:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> Note the error is raised after applying the patch, without the patch,\n> the above won't show any error (error message could be improved here).\n> Such cases can lead to unpredictable behavior without a patch because\n> we won't be able to detect the execution of parallel-unsafe functions.\n> There are similar examples from regression tests. Now, one way to deal\n> with similar cases could be that we document them and say we don't\n> consider parallel-safety in such cases and the other way is to detect\n> such cases and error out. Yet another way could be that we somehow try\n> to check these cases as well before enabling parallelism but I thought\n> these cases fall in the similar category as aggregate's support\n> functions.\n\nI'm not very excited about the idea of checking type input and type\noutput functions. It's hard to imagine someone wanting to do something\nparallel-unsafe in such a function, unless they're just trying to\nprove a point. So I don't think checking it would be a good investment\nof CPU cycles. If we do anything at all, I'd vote for just documenting\nthat such functions should be parallel-safe and that their\nparallel-safety marks are not checked when they are used as type\ninput/output functions. 
Perhaps we ought to document the same thing\nwith regard to opclass support functions, another place where it's\nhard to imagine a realistic use case for doing something\nparallel-unsafe.\n\nIn the case of aggregates, I see the issues slightly differently. I\ndon't know that it's super-likely that someone would want to create a\nparallel-unsafe aggregate function, but I think there should be a way\nto do it, just in case. However, if somebody wants that, they can just\nmark the aggregate itself unsafe. There's no benefit for the user to\nmarking the aggregate safe and the support functions unsafe and hoping\nthat the system figures it out somehow.\n\nIn my opinion, you're basically taking too pure a view of this. We're\nnot trying to create a system that does such a good job checking\nparallel safety markings that nobody can possibly find a thing that\nisn't checked no matter how hard they poke around the dark corners of\nthe system. Or at least we shouldn't be trying to do that. We should\nbe trying to create a system that works well in practice, and gives\npeople the flexibility to easily avoid parallelism when they have a\nquery that is parallel-unsafe, while still getting the benefit of\nparallelism the rest of the time.\n\nI don't know what all the cases you've uncovered are, and maybe\nthere's something in there that I'd be more excited about changing if\nI knew what it was, but the particular problems you're mentioning here\nseem more theoretical than real to me.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 8 Jun 2021 10:51:18 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] 
Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Tuesday, June 8, 2021 10:51 PM Robert Haas <robertmhaas@gmail.com>\r\n> On Mon, Jun 7, 2021 at 11:33 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > Note the error is raised after applying the patch, without the patch,\r\n> > the above won't show any error (error message could be improved here).\r\n> > Such cases can lead to unpredictable behavior without a patch because\r\n> > we won't be able to detect the execution of parallel-unsafe functions.\r\n> > There are similar examples from regression tests. Now, one way to deal\r\n> > with similar cases could be that we document them and say we don't\r\n> > consider parallel-safety in such cases and the other way is to detect\r\n> > such cases and error out. Yet another way could be that we somehow try\r\n> > to check these cases as well before enabling parallelism but I thought\r\n> > these cases fall in the similar category as aggregate's support\r\n> > functions.\r\n> \r\n> I'm not very excited about the idea of checking type input and type output\r\n> functions. It's hard to imagine someone wanting to do something\r\n> parallel-unsafe in such a function, unless they're just trying to prove a point. So\r\n> I don't think checking it would be a good investment of CPU cycles. If we do\r\n> anything at all, I'd vote for just documenting that such functions should be\r\n> parallel-safe and that their parallel-safety marks are not checked when they are\r\n> used as type input/output functions. Perhaps we ought to document the same\r\n> thing with regard to opclass support functions, another place where it's hard to\r\n> imagine a realistic use case for doing something parallel-unsafe.\r\n> \r\n> In the case of aggregates, I see the issues slightly differently. 
I don't know that\r\n> it's super-likely that someone would want to create a parallel-unsafe\r\n> aggregate function, but I think there should be a way to do it, just in case.\r\n> However, if somebody wants that, they can just mark the aggregate itself\r\n> unsafe. There's no benefit for the user to marking the aggregate safe and the\r\n> support functions unsafe and hoping that the system figures it out somehow.\r\n> \r\n> In my opinion, you're basically taking too pure a view of this. We're not trying to\r\n> create a system that does such a good job checking parallel safety markings\r\n> that nobody can possibly find a thing that isn't checked no matter how hard\r\n> they poke around the dark corners of the system. Or at least we shouldn't be\r\n> trying to do that. We should be trying to create a system that works well in\r\n> practice, and gives people the flexibility to easily avoid parallelism when they\r\n> have a query that is parallel-unsafe, while still getting the benefit of parallelism\r\n> the rest of the time.\r\n> \r\n> I don't know what all the cases you've uncovered are, and maybe there's\r\n> something in there that I'd be more excited about changing if I knew what it\r\n> was, but the particular problems you're mentioning here seem more\r\n> theoretical than real to me.\r\n\r\nI think another case where a parallel unsafe function could be invoked in parallel mode is\r\nthe TEXT SEARCH TEMPLATE's init_function or lexize_function. Because currently, \r\nthe planner does not check the safety of these functions. 
Please see the example below[1]\r\n\r\nI am not sure whether users will use a parallel unsafe function in init_function or lexize_function,\r\nbut if they do, it could cause unexpected results.\r\n\r\nDoes it make sense to add some check for init_function or lexize_function\r\nor document this together with type input/output and opclass support functions?\r\n\r\n[1]----------------------------EXAMPLE------------------------------------\r\nCREATE FUNCTION dsnowball_init(INTERNAL)\r\nRETURNS INTERNAL AS '$libdir/dict_snowball', 'dsnowball_init'\r\nLANGUAGE C STRICT;\r\n\r\nCREATE FUNCTION dsnowball_lexize(INTERNAL, INTERNAL, INTERNAL, INTERNAL)\r\nRETURNS INTERNAL AS '$libdir/dict_snowball', 'dsnowball_lexize'\r\nLANGUAGE C STRICT;\r\n\r\nCREATE TEXT SEARCH TEMPLATE snowball\r\n(INIT = dsnowball_init,\r\nLEXIZE = dsnowball_lexize);\r\n\r\nCOMMENT ON TEXT SEARCH TEMPLATE snowball IS 'snowball stemmer';\r\n\r\ncreate table pendtest (ts tsvector);\r\ncreate index pendtest_idx on pendtest using gin(ts);\r\ninsert into pendtest select (to_tsvector('Lore ipsum')) from generate_series(1,10000000,1);\r\nanalyze;\r\n\r\nset enable_bitmapscan = off;\r\n\r\npostgres=# explain select * from pendtest where to_tsquery('345&qwerty') @@ ts;\r\n QUERY PLAN\r\n--------------------------------------------------------------------------------\r\n Gather (cost=1000.00..1168292.86 rows=250 width=31)\r\n Workers Planned: 2\r\n -> Parallel Seq Scan on pendtest (cost=0.00..1167267.86 rows=104 width=31)\r\n Filter: (to_tsquery('345&qwerty'::text) @@ ts)\r\n\r\n-- In the example above, dsnowball_init() and dsnowball_lexize() will be executed in parallel mode.\r\n\r\n----------------------------EXAMPLE------------------------------------\r\n\r\nBest regards,\r\nhouzj\r\n\r\n", "msg_date": "Wed, 9 Jun 2021 06:20:43 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [bug?] 
Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com> writes:\n> On Tuesday, June 8, 2021 10:51 PM Robert Haas <robertmhaas@gmail.com>\n> wrote:\n>> In my opinion, you're basically taking too pure a view of this. We're\n>> not trying to create a system that does such a good job checking\n>> parallel safety markings that nobody can possibly find a thing that\n>> isn't checked no matter how hard they poke around the dark corners of\n>> the system. Or at least we shouldn't be trying to do that.\n\n> I think another case that parallel unsafe function could be invoked in\n> parallel mode is the TEXT SEARCH TEMPLATE's init_function or\n> lexize_function.\n\nAnother point worth making in this connection is what I cited earlier\ntoday in ba2c6d6ce:\n\n: ... We could imagine prohibiting SCROLL when\n: the query contains volatile functions, but that would be\n: expensive to enforce. Moreover, it could break applications\n: that work just fine, if they have functions that are in fact\n: stable but the user neglected to mark them so. So settle for\n: documenting the hazard.\n\nIf you break an application that used to work, because the\ndeveloper was careless about marking a function PARALLEL SAFE\neven though it actually is, I do not think you have made any\nfriends or improved anyone's life. In fact, you could easily\nmake things far worse, by encouraging people to mark things\nPARALLEL SAFE that are not. (We just had a thread about somebody\nmarking a function immutable because they wanted effect X of that,\nand then whining because they also got effect Y.)\n\nThere are specific cases where there's a good reason to worry.\nFor example, if we assume blindly that domain_in() is parallel\nsafe, we will have cause to regret that. 
But I don't find that\nto be a reason why we need to lock down everything everywhere.\nWe need to understand the tradeoffs involved in what we check,\nand apply checks that are likely to avoid problems, while not\nbeing too nanny-ish.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 09 Jun 2021 02:43:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Wed, Jun 9, 2021 at 2:43 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> There are specific cases where there's a good reason to worry.\n> For example, if we assume blindly that domain_in() is parallel\n> safe, we will have cause to regret that. But I don't find that\n> to be a reason why we need to lock down everything everywhere.\n> We need to understand the tradeoffs involved in what we check,\n> and apply checks that are likely to avoid problems, while not\n> being too nanny-ish.\n\nYeah, that's exactly how I feel about it, too.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 9 Jun 2021 12:16:55 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Wed, Jun 9, 2021 at 9:47 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Jun 9, 2021 at 2:43 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > There are specific cases where there's a good reason to worry.\n> > For example, if we assume blindly that domain_in() is parallel\n> > safe, we will have cause to regret that. But I don't find that\n> > to be a reason why we need to lock down everything everywhere.\n> > We need to understand the tradeoffs involved in what we check,\n> > and apply checks that are likely to avoid problems, while not\n> > being too nanny-ish.\n>\n> Yeah, that's exactly how I feel about it, too.\n>\n\nFair enough. 
So, I think there is a consensus to drop this patch and\nif one wants then we can document these cases. Also, we don't want it\nto enable parallelism for Inserts where we are trying to pursue the\napproach to have a flag in pg_class which allows users to specify\nwhether writes are allowed on a specified relation.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 10 Jun 2021 10:24:36 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Thu, Jun 10, 2021 at 12:54 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> Fair enough. So, I think there is a consensus to drop this patch and\n> if one wants then we can document these cases. Also, we don't want it\n> to enable parallelism for Inserts where we are trying to pursue the\n> approach to have a flag in pg_class which allows users to specify\n> whether writes are allowed on a specified relation.\n\n+1. The question that's still on my mind a little bit is whether\nthere's a reasonable alternative to forcing users to set a flag\nmanually. It seems less convenient than having to do the same thing\nfor a function, because most users probably only create functions\noccasionally, but creating tables seems like it's likely to be a more\ncommon operation. Plus, a function is basically a program, so it sort\nof feels reasonable that you might need to give the system some hints\nabout what the program does, but that doesn't apply to a table.\n\nNow, if we forget about partitioned tables here for a moment, I don't\nreally see why we couldn't do this computation based on the relcache\nentry, and then just cache the flag there? I think anything that would\nchange the state for a plain old table would also cause some\ninvalidation that we could notice. And I don't think that the cost of\nwalking over triggers, constraints, etc. 
and computing the value we\nneed on demand would be exorbitant.\n\nFor a partitioned table, things are a lot more difficult. For one\nthing, the cost of computation can be a lot higher; there might be a\nthousand or more partitions. For another thing, computing the value\ncould have scary side effects, like opening all the partitions, which\nwould also mean taking locks on them and building expensive relcache\nentries. For a third thing, we'd have no way of knowing whether the\nvalue was still current, because an event that produces an\ninvalidation for a partition doesn't necessarily produce any\ninvalidation for the partitioned table.\n\nSo one idea is maybe we only need an explicit flag for partitioned\ntables, and regular tables we can just work it out automatically.\nAnother idea is maybe we try to solve the problems somehow so that it\ncan also work with partitioned tables. I don't really have a great\nidea right at the moment, but maybe it's worth devoting some more\nthought to the problem.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 10 Jun 2021 13:29:34 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Thu, Jun 10, 2021 at 10:59 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Jun 10, 2021 at 12:54 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Fair enough. So, I think there is a consensus to drop this patch and\n> > if one wants then we can document these cases. Also, we don't want it\n> > to enable parallelism for Inserts where we are trying to pursue the\n> > approach to have a flag in pg_class which allows users to specify\n> > whether writes are allowed on a specified relation.\n>\n> +1. The question that's still on my mind a little bit is whether\n> there's a reasonable alternative to forcing users to set a flag\n> manually. 
It seems less convenient than having to do the same thing\n> for a function, because most users probably only create functions\n> occasionally, but creating tables seems like it's likely to be a more\n> common operation. Plus, a function is basically a program, so it sort\n> of feels reasonable that you might need to give the system some hints\n> about what the program does, but that doesn't apply to a table.\n>\n> Now, if we forget about partitioned tables here for a moment, I don't\n> really see why we couldn't do this computation based on the relcache\n> entry, and then just cache the flag there?\n>\n\nDo we invalidate relcache entry if someone changes say trigger or some\nindex AM function property via Alter Function (in our case from safe\nto unsafe or vice-versa)? Tsunakawa-San has mentioned this as the\nreason in his email [1] why we can't rely on caching this property in\nrelcache entry. I also don't see anything in AlterFunction which would\nsuggest that we invalidate the relation with which the function might\nbe associated via trigger.\n\nThe other idea in this regard was to validate the parallel safety\nduring DDL instead of relying completely on the user but that also\nseems to have similar hazards as pointed by Tom in his email [2].\n\nI think it would be good if there is a way we can do this without\nasking for user input but if not then we can try to provide\nparallel-safety info about relation which will slightly ease the\nuser's job. Such a function would check relation (and its partitions)\nto see if there exists any parallel-unsafe clause and accordingly\nreturn the same to the user. 
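(To make the idea of such a parallel-safety reporting function concrete, here is a rough sketch — purely illustrative Python with invented names and data structures; nothing below is actual PostgreSQL code. A real check would walk things like trigger functions, constraint and default expressions, and index expressions, and recurse into the partition tree.)

```python
# Toy model of the proposed check: report the worst parallel hazard found
# in a relation and all of its partitions. "objects" stands in for the
# parallel-safety markings of trigger functions, constraint expressions,
# column defaults, index expressions, and so on.
HAZARD_RANK = {"safe": 0, "restricted": 1, "unsafe": 2}

def max_parallel_hazard_for_rel(rel):
    worst = "safe"
    for hazard in rel.get("objects", []):      # markings of attached objects
        if HAZARD_RANK[hazard] > HAZARD_RANK[worst]:
            worst = hazard
    for part in rel.get("partitions", []):     # recurse into the partition tree
        child = max_parallel_hazard_for_rel(part)
        if HAZARD_RANK[child] > HAZARD_RANK[worst]:
            worst = child
    return worst
```

As noted, such a report is only advisory: it can become stale as soon as someone alters a function or attaches a new partition.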
Now, again if the user changes the\nparallel-safe property later we won't be able to automatically reflect\nthe same for rel.\n\n[1] - https://www.postgresql.org/message-id/TYAPR01MB29905A9AB82CC8BA50AB0F80FE709@TYAPR01MB2990.jpnprd01.prod.outlook.com\n[2] - https://www.postgresql.org/message-id/1030301.1616560249%40sss.pgh.pa.us\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 11 Jun 2021 09:43:23 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Fri, Jun 11, 2021 at 12:13 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> Do we invalidate relcache entry if someone changes say trigger or some\n> index AM function property via Alter Function (in our case from safe\n> to unsafe or vice-versa)? Tsunakawa-San has mentioned this as the\n> reason in his email [1] why we can't rely on caching this property in\n> relcache entry. I also don't see anything in AlterFunction which would\n> suggest that we invalidate the relation with which the function might\n> be associated via trigger.\n\nHmm. I am not sure that index AM functions really need to be checked,\nbut triggers certainly do. I think you are correct that an ALTER\nFUNCTION wouldn't invalidate the relcache entry, which is I guess\npretty much the same problem Tom was pointing out in the thread to\nwhich you linked.\n\nBut ... thinking out of the box as Tom suggests, what if we came up\nwith some new kind of invalidation message that is only sent when a\nfunction's parallel-safety marking is changed? And every backend in\nthe same database then needs to re-evaluate the parallel-safety of\nevery relation for which it has cached a value. Such recomputations\nmight be expensive, but they would probably also occur very\ninfrequently. 
And you might even be able to make it a bit more\nfine-grained if it's worth the effort to worry about that: say that in\naddition to caching the parallel-safety of the relation, we also cache\na list of the pg_proc OIDs upon which that determination depends. Then\nwhen we hear that the flag's been changed for OID 123456, we only need\nto invalidate the cached value for relations that depended on that\npg_proc entry. There are ways that a relation could become\nparallel-unsafe without changing the parallel-safety marking of any\nfunction, but perhaps all of the other ways involve a relcache\ninvalidation?\n\nJust brainstorming here. I might be off-track.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Jun 2021 16:25:59 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Sat, Jun 12, 2021 at 1:56 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Jun 11, 2021 at 12:13 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Do we invalidate relcache entry if someone changes say trigger or some\n> > index AM function property via Alter Function (in our case from safe\n> > to unsafe or vice-versa)? Tsunakawa-San has mentioned this as the\n> > reason in his email [1] why we can't rely on caching this property in\n> > relcache entry. I also don't see anything in AlterFunction which would\n> > suggest that we invalidate the relation with which the function might\n> > be associated via trigger.\n>\n> Hmm. I am not sure index that AM functions really need to be checked,\n> but triggers certainly do.\n>\n\nWhy do you think we don't need to check index AM functions? Say we\nhave an index expression that uses function and if its parallel safety\nis changed then probably that also impacts whether we can do insert in\nparallel. 
Because otherwise, we will end up executing some parallel\nunsafe function in parallel mode during index insertion.\n\n> I think if you are correct that an ALTER\n> FUNCTION wouldn't invalidate the relcache entry, which is I guess\n> pretty much the same problem Tom was pointing out in the thread to\n> which you linked.\n>\n> But ... thinking out of the box as Tom suggests, what if we came up\n> with some new kind of invalidation message that is only sent when a\n> function's parallel-safety marking is changed? And every backend in\n> the same database then needs to re-evaluate the parallel-safety of\n> every relation for which it has cached a value. Such recomputations\n> might be expensive, but they would probably also occur very\n> infrequently. And you might even be able to make it a bit more\n> fine-grained if it's worth the effort to worry about that: say that in\n> addition to caching the parallel-safety of the relation, we also cache\n> a list of the pg_proc OIDs upon which that determination depends. Then\n> when we hear that the flag's been changed for OID 123456, we only need\n> to invalidate the cached value for relations that depended on that\n> pg_proc entry.\n>\n\nYeah, this could be one idea but I think even if we use pg_proc OID,\nwe still need to check all the rel cache entries to find which one\ncontains the invalidated OID and that could be expensive. I wonder\ncan't we directly find the relation involved and register invalidation\nfor the same? We are able to find the relation to which trigger\nfunction is associated during drop function via findDependentObjects\nby scanning pg_depend. Assuming, we are able to find the relation for\ntrigger function by scanning pg_depend, what kinds of problems do we\nenvision in registering the invalidation for the same?\n\nI think we probably need to worry about the additional cost to find\ndependent objects and if there are any race conditions in doing so as\npointed out by Tom in his email [1]. 
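(As a toy illustration of the pg_depend-style lookup being discussed — the data layout below is invented for clarity; real pg_depend rows carry classid/objid/refclassid/refobjid, and findDependentObjects is C code inside the server:)

```python
# Toy model: when a function's parallel-safety marking changes, follow the
# dependency edges function <- trigger <- relation to find the relations
# for which a relcache invalidation would have to be registered.
def rels_to_invalidate(func_oid, trigger_uses, trigger_owner):
    # trigger_uses:  {trigger_oid: function_oid}  -- trigger calls function
    # trigger_owner: {trigger_oid: relation_oid}  -- trigger belongs to relation
    return {trigger_owner[tg]
            for tg, fn in trigger_uses.items()
            if fn == func_oid}
```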
The concern related to cost could\nbe addressed by your idea of registering such an invalidation only\nwhen the user changes the parallel safety of the function which we\ndon't expect to be a frequent operation. Now, the race condition\nI can think of is that by the time we change parallel-safety\n(say making it unsafe) of a function, some of the other sessions might\nhave already started processing an insert on a relation where that\nfunction is associated via trigger or check constraint in which case\nthere could be a problem. I think to avoid that we need to acquire an\nExclusive lock on the relation as we are doing in Rename Policy kind\nof operations.\n\n\n> There are ways that a relation could become\n> parallel-unsafe without changing the parallel-safety marking of any\n> function, but perhaps all of the other ways involve a relcache\n> invalidation?\n>\n\nProbably, but I guess we can investigate/test those cases as well\nonce we find/agree on the solution for the functions stuff.\n\n[1] - https://www.postgresql.org/message-id/1030301.1616560249%40sss.pgh.pa.us\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 14 Jun 2021 12:02:12 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] 
Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Mon, Jun 14, 2021 at 2:32 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> Why do you think we don't need to check index AM functions? Say we\n> have an index expression that uses function and if its parallel safety\n> is changed then probably that also impacts whether we can do insert in\n> parallel. Because otherwise, we will end up executing some parallel\n> unsafe function in parallel mode during index insertion.\n\nI'm not saying that we don't need to check index expressions. I agree\nthat we need to check those. The index AM functions are things like\nbtint4cmp(). I don't think that a function like that should ever be\nparallel-unsafe.\n\n> Yeah, this could be one idea but I think even if we use pg_proc OID,\n> we still need to check all the rel cache entries to find which one\n> contains the invalidated OID and that could be expensive. I wonder\n> can't we directly find the relation involved and register invalidation\n> for the same? We are able to find the relation to which trigger\n> function is associated during drop function via findDependentObjects\n> by scanning pg_depend. Assuming, we are able to find the relation for\n> trigger function by scanning pg_depend, what kinds of problems do we\n> envision in registering the invalidation for the same?\n\nI don't think that finding the relation involved and registering an\ninvalidation for the same will work properly. Suppose there is a\nconcurrently-running transaction which has created a new table and\nattached a trigger function to it. You can't see any of the catalog\nentries for that relation yet, so you can't invalidate it, but\ninvalidation needs to happen. Even if you used some snapshot that can\nsee those catalog entries before they are committed, I doubt it fixes\nthe race condition. 
You can't hold any lock on that relation, because\nthe creating transaction holds AccessExclusiveLock, but the whole\ninvalidation mechanism is built around the assumption that the sender\nputs messages into the shared queue first and then releases locks,\nwhile the receiver first acquires a conflicting lock and then\nprocesses messages from the queue. Without locks, that synchronization\nalgorithm can't work reliably. As a consequence of all that, I believe\nthat, not just in this particular case but in general, the\ninvalidation message needs to describe the thing that has actually\nchanged, rather than any derived property. We can make invalidations\nthat say \"some function's parallel-safety flag has changed\" or \"this\nparticular function's parallel-safety flag has changed\" or \"this\nparticular function has changed in some way\" (this one, we have\nalready), but anything that involves trying to figure out what the\nconsequences of such a change might be and saying \"hey, you, please\nupdate XYZ because I changed something somewhere that could affect\nthat\" is not going to be correct.\n\n> I think we probably need to worry about the additional cost to find\n> dependent objects and if there are any race conditions in doing so as\n> pointed out by Tom in his email [1]. The concern related to cost could\n> be addressed by your idea of registering such an invalidation only\n> when the user changes the parallel safety of the function which we\n> don't expect to be a frequent operation. Now, here the race condition,\n> I could think of could be that by the time we change parallel-safety\n> (say making it unsafe) of a function, some of the other sessions might\n> have already started processing insert on a relation where that\n> function is associated via trigger or check constraint in which case\n> there could be a problem. 
I think to avoid that we need to acquire an\n> Exclusive lock on the relation as we are doing in Rename Policy kind\n> of operations.\n\nWell, the big issue here is that we don't actually lock functions\nwhile they are in use. So there's absolutely nothing that prevents a\nfunction from being altered in any arbitrary way, or even dropped,\nwhile code that uses it is running. I don't really know what happens\nin practice if you do that sort of thing: can you get the same query\nto run with one function definition for the first part of execution\nand some other definition for the rest of execution? I tend to doubt\nit, because I suspect we cache the function definition at some point.\nIf that's the case, caching the parallel-safety marking at the same\npoint seems OK too, or at least no worse than what we're doing\nalready. But on the other hand if it is possible for a query's notion\nof the function definition to shift while the query is in flight, then\nthis is just another example of that and no worse than any other.\nInstead of changing the parallel-safety flag, somebody could redefine\nthe function so that it divides by zero or produces a syntax error and\nkaboom, running queries break. Either way, I don't see what the big\ndeal is. As long as we make the handling of parallel-safety consistent\nwith other ways the function could be concurrently redefined, it won't\nsuck any more than the current system already does, or in any\nfundamentally new ways.\n\nEven if this line of thinking is correct, there's a big issue for\npartitioning hierarchies because there you need to know stuff about\nrelations that you don't have any other reason to open. I'm just\narguing that if there's no partitioning, the problem seems reasonably\nsolvable. Either you changed something about the relation, in which\ncase you've got to lock it and issue invalidations, or you've changed\nsomething about the function, which could be handled via a new type of\ninvalidation. 
I don't really see why the cost would be particularly\nbad. Suppose that for every relation, you have a flag which is either\nPARALLEL_DML_SAFE, PARALLEL_DML_RESTRICTED, PARALLEL_DML_UNSAFE, or\nPARALLEL_DML_SAFETY_UNKNOWN. When someone sends a message saying \"some\nexisting function's parallel-safety changed!\" you reset that flag for\nevery relation in the relcache to PARALLEL_DML_SAFETY_UNKNOWN. Then if\nsomebody does DML on that relation and we want to consider\nparallelism, it's got to recompute that flag. None of that sounds\nhorribly expensive.\n\nI mean, it could be somewhat annoying if you have 100k relations open\nand sit around all day flipping parallel-safety markings on and off\nand then doing a single-row insert after each flip, but if that's the\nonly scenario where we incur significant extra overhead from this kind\nof design, it seems clearly better than forcing users to set a flag\nmanually. Maybe it isn't, but I don't really see what the other\nproblem would be right now. Except, of course, for partitioning, which\nI'm not quite sure what to do about.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Jun 2021 11:38:35 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Mon, Jun 14, 2021 at 9:08 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Jun 14, 2021 at 2:32 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Why do you think we don't need to check index AM functions? Say we\n> > have an index expression that uses function and if its parallel safety\n> > is changed then probably that also impacts whether we can do insert in\n> > parallel. Because otherwise, we will end up executing some parallel\n> > unsafe function in parallel mode during index insertion.\n>\n> I'm not saying that we don't need to check index expressions. 
I agree\n> that we need to check those. The index AM functions are things like\n> btint4cmp(). I don't think that a function like that should ever be\n> parallel-unsafe.\n>\n\nOkay, but I think if we go with your suggested model where whenever\nthere is a change in parallel-safety of any function, we need to send\nthe new invalidation then I think it won't matter whether the function\nis associated with index expression, check constraint in the table, or\nis used in any other way.\n\n>\n> Even if this line of thinking is correct, there's a big issue for\n> partitioning hierarchies because there you need to know stuff about\n> relations that you don't have any other reason to open. I'm just\n> arguing that if there's no partitioning, the problem seems reasonably\n> solvable. Either you changed something about the relation, in which\n> case you've got to lock it and issue invalidations, or you've changed\n> something about the function, which could be handled via a new type of\n> invalidation. I don't really see why the cost would be particularly\n> bad. Suppose that for every relation, you have a flag which is either\n> PARALLEL_DML_SAFE, PARALLEL_DML_RESTRICTED, PARALLEL_DML_UNSAFE, or\n> PARALLEL_DML_SAFETY_UNKNOWN. When someone sends a message saying \"some\n> existing function's parallel-safety changed!\" you reset that flag for\n> every relation in the relcache to PARALLEL_DML_SAFETY_UNKNOWN. Then if\n> somebody does DML on that relation and we want to consider\n> parallelism, it's got to recompute that flag. None of that sounds\n> horribly expensive.\n>\n\nSounds reasonable. 
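(As a rough sketch of the four-state scheme quoted above — purely illustrative Python; the names are invented and none of this is actual PostgreSQL code:)

```python
# Each cached relation carries one parallel-DML safety state. A
# "some function's parallel-safety changed" invalidation resets every
# entry to UNKNOWN; the flag is then recomputed lazily the next time DML
# on that relation considers parallelism.
SAFE, RESTRICTED, UNSAFE, UNKNOWN = "safe", "restricted", "unsafe", "unknown"

class ParallelDmlCache:
    def __init__(self, recompute):
        self.recompute = recompute   # expensive: walks triggers, constraints, ...
        self.flags = {}              # relation oid -> state

    def on_function_safety_changed(self):
        for relid in self.flags:     # coarse-grained: forget every verdict
            self.flags[relid] = UNKNOWN

    def get(self, relid):
        if self.flags.get(relid, UNKNOWN) == UNKNOWN:
            self.flags[relid] = self.recompute(relid)
        return self.flags[relid]
```

The extra cost only shows up when a marking actually changes, which should be rare.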
I will think more on this and see if anything else\ncomes to mind apart from what you have mentioned.\n\n> I mean, it could be somewhat annoying if you have 100k relations open\n> and sit around all day flipping parallel-safety markings on and off\n> and then doing a single-row insert after each flip, but if that's the\n> only scenario where we incur significant extra overhead from this kind\n> of design, it seems clearly better than forcing users to set a flag\n> manually. Maybe it isn't, but I don't really see what the other\n> problem would be right now. Except, of course, for partitioning, which\n> I'm not quite sure what to do about.\n>\n\nYeah, dealing with partitioned tables is tricky. I think if we don't\nwant to check upfront the parallel safety of all the partitions then\nthe other option as discussed could be to ask the user to specify the\nparallel safety of partitioned tables. We can additionally check the\nparallel safety of partitions when we are trying to insert into a\nparticular partition and error out if we detect any parallel-unsafe\nclause and we are in parallel-mode. So, in this case, we won't be\ncompletely relying on the users. Users can either change the parallel\nsafe option of the table or remove/change the parallel-unsafe clause\nafter the error. The new invalidation message as we are discussing would\ninvalidate the parallel-safety for individual partitions but not the\nroot partition (partitioned table itself). For root partition, we will\nrely on information specified by the user.\n\nI am not sure if we have a simple way to check the parallel safety of\npartitioned tables. In some way, we need to rely on the user, either (a) by\nproviding an option to specify whether parallel Inserts (and/or other\nDMLs) can be performed, or (b) by providing a guc and/or rel option\nwhich indicates that we can check the parallel-safety of all the\npartitions. 
Yet another option that I don't like could be to\nparallelize inserts on non-partitioned tables.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 15 Jun 2021 16:35:42 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Tue, Jun 15, 2021 at 7:05 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> Okay, but I think if we go with your suggested model where whenever\n> there is a change in parallel-safety of any function, we need to send\n> the new invalidation then I think it won't matter whether the function\n> is associated with index expression, check constraint in the table, or\n> is used in any other way.\n\nNo, it will still matter, because I'm proposing that the\nparallel-safety of functions that we only access via operator classes\nwill not even be checked. Also, if we decided to make the system more\nfine-grained - e.g. by invalidating on the specific OID of the\nfunction that was changed rather than doing something that is\ndatabase-wide or global - then it matters even more.\n\n> Yeah, dealing with partitioned tables is tricky. I think if we don't\n> want to check upfront the parallel safety of all the partitions then\n> the other option as discussed could be to ask the user to specify the\n> parallel safety of partitioned tables.\n\nJust to be clear here, I don't think it really matters what we *want*\nto do. I don't think it's reasonably *possible* to check all the\npartitions, because we don't hold locks on them. When we're assessing\na bunch of stuff related to an individual relation, we have a lock on\nit. I think - though we should double-check tablecmds.c - that this is\nenough to prevent all of the dependent objects - triggers,\nconstraints, etc. - from changing. So the stuff we care about is\nstable. But the situation with a partitioned table is different. 
In\nthat case, we can't even examine that stuff without locking all the\npartitions. And even if we do lock all the partitions, the stuff could\nchange immediately afterward and we wouldn't know. So I think it would\nbe difficult to make it correct.\n\nNow, maybe it could be done, and I think that's worth a little more\nthought. For example, perhaps whenever we invalidate a relation, we\ncould also somehow send some new, special kind of invalidation for its\nparent saying, essentially, \"hey, one of your children has changed in\na way you might care about.\" But that's not good enough, because it\nonly goes up one level. The grandparent would still be unaware that a\nchange it potentially cares about has occurred someplace down in the\npartitioning hierarchy. That seems hard to patch up, again because of\nthe locking rules. The child can know the OID of its parent without\nlocking the parent, but it can't know the OID of its grandparent\nwithout locking its parent. Walking up the whole partitioning\nhierarchy might be an issue for a number of reasons, including\npossible deadlocks, and possible race conditions where we don't emit\nall of the right invalidations in the face of concurrent changes. So I\ndon't quite see a way around this part of the problem, but I may well\nbe missing something. In fact I hope I am missing something, because\nsolving this problem would be really nice.\n\n> We can additionally check the\n> parallel safety of partitions when we are trying to insert into a\n> particular partition and error out if we detect any parallel-unsafe\n> clause and we are in parallel-mode. So, in this case, we won't be\n> completely relying on the users. Users can either change the parallel\n> safe option of the table or remove/change the parallel-unsafe clause\n> after error. The new invalidation message as we are discussing would\n> invalidate the parallel-safety for individual partitions but not the\n> root partition (partitioned table itself). 
For root partition, we will\n> rely on information specified by the user.\n\nYeah, that may be the best we can do. Just to be clear, I think we\nwould want to check whether the relation is still parallel-safe at the\nstart of the operation, but not have a run-time check at each function\ncall.\n\n> I am not sure if we have a simple way to check the parallel safety of\n> partitioned tables. In some way, we need to rely on user either (a) by\n> providing an option to specify whether parallel Inserts (and/or other\n> DMLs) can be performed, or (b) by providing a guc and/or rel option\n> which indicate that we can check the parallel-safety of all the\n> partitions. Yet another option that I don't like could be to\n> parallelize inserts on non-partitioned tables.\n\nIf we figure out a way to check the partitions automatically that\nactually works, we don't need a switch for it; we can (and should)\njust do it that way all the time. But if we can't come up with a\ncorrect algorithm for that, then we'll need to add some kind of option\nwhere the user declares whether it's OK.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 15 Jun 2021 10:00:49 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Mon, Jun 14, 2021 at 9:08 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Jun 14, 2021 at 2:32 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > Yeah, this could be one idea but I think even if we use pg_proc OID,\n> > we still need to check all the rel cache entries to find which one\n> > contains the invalidated OID and that could be expensive. I wonder\n> > can't we directly find the relation involved and register invalidation\n> > for the same? 
We are able to find the relation to which trigger\n> > function is associated during drop function via findDependentObjects\n> > by scanning pg_depend. Assuming, we are able to find the relation for\n> > trigger function by scanning pg_depend, what kinds of problems do we\n> > envision in registering the invalidation for the same?\n>\n> I don't think that finding the relation involved and registering an\n> invalidation for the same will work properly. Suppose there is a\n> concurrently-running transaction which has created a new table and\n> attached a trigger function to it. You can't see any of the catalog\n> entries for that relation yet, so you can't invalidate it, but\n> invalidation needs to happen. Even if you used some snapshot that can\n> see those catalog entries before they are committed, I doubt it fixes\n> the race condition. You can't hold any lock on that relation, because\n> the creating transaction holds AccessExclusiveLock, but the whole\n> invalidation mechanism is built around the assumption that the sender\n> puts messages into the shared queue first and then releases locks,\n> while the receiver first acquires a conflicting lock and then\n> processes messages from the queue.\n>\n\nWon't such messages be proceesed at start transaction time\n(AtStart_Cache->AcceptInvalidationMessages)?\n\n> Without locks, that synchronization\n> algorithm can't work reliably. As a consequence of all that, I believe\n> that, not just in this particular case but in general, the\n> invalidation message needs to describe the thing that has actually\n> changed, rather than any derived property. 
We can make invalidations\n> that say \"some function's parallel-safety flag has changed\" or \"this\n> particular function's parallel-safety flag has changed\" or \"this\n> particular function has changed in some way\" (this one, we have\n> already), but anything that involves trying to figure out what the\n> consequences of such a change might be and saying \"hey, you, please\n> update XYZ because I changed something somewhere that could affect\n> that\" is not going to be correct.\n>\n> > I think we probably need to worry about the additional cost to find\n> > dependent objects and if there are any race conditions in doing so as\n> > pointed out by Tom in his email [1]. The concern related to cost could\n> > be addressed by your idea of registering such an invalidation only\n> > when the user changes the parallel safety of the function which we\n> > don't expect to be a frequent operation. Now, here the race condition,\n> > I could think of could be that by the time we change parallel-safety\n> > (say making it unsafe) of a function, some of the other sessions might\n> > have already started processing insert on a relation where that\n> > function is associated via trigger or check constraint in which case\n> > there could be a problem. I think to avoid that we need to acquire an\n> > Exclusive lock on the relation as we are doing in Rename Policy kind\n> > of operations.\n>\n> Well, the big issue here is that we don't actually lock functions\n> while they are in use. So there's absolutely nothing that prevents a\n> function from being altered in any arbitrary way, or even dropped,\n> while code that uses it is running. I don't really know what happens\n> in practice if you do that sort of thing: can you get the same query\n> to run with one function definition for the first part of execution\n> and some other definition for the rest of execution? 
I tend to doubt\n> it, because I suspect we cache the function definition at some point.\n>\n\nIt is possible that in the same statement execution a different\nfunction definition can be executed. Say, in session-1 we are\ninserting three rows, on first-row execution definition-1 of function\nin index expression gets executed. Now, from session-2, we change the\ndefinition of the function to definition-2. Now, in session-1, on\nsecond-row insertion, while executing definition-1 of function, we\ninsert in another table that will accept the invalidation message\nregistered in session-2. Now, on third-row insertion, the new\ndefinition (definition-2) of function will be executed.\n\n> If that's the case, caching the parallel-safety marking at the same\n> point seems OK too, or at least no worse than what we're doing\n> already. But on the other hand if it is possible for a query's notion\n> of the function definition to shift while the query is in flight, then\n> this is just another example of that and no worse than any other.\n> Instead of changing the parallel-safety flag, somebody could redefine\n> the function so that it divides by zero or produces a syntax error and\n> kaboom, running queries break. Either way, I don't see what the big\n> deal is. As long as we make the handling of parallel-safety consistent\n> with other ways the function could be concurrently redefined, it won't\n> suck any more than the current system already does, or in any\n> fundamentally new ways.\n>\n\nOkay, so, in this scheme, we have allowed changing the function\ndefinition during statement execution but even though the rel's\nparallel-safe property gets modified (say to parallel-unsafe), we will\nstill proceed in parallel-mode as if it's not changed. 
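The three-row scenario above can be mimicked with a toy cache (an invented model, not how function caching is actually implemented in the backend):

```python
# Toy model of the scenario described above: a cached function body is
# used until an invalidation is accepted mid-statement, so one INSERT
# statement can execute two different definitions of the same function.

catalog = {"f": "definition-1"}     # shared, authoritative definition
session_cache = {}                  # session-local cached body

def call_f():
    if "f" not in session_cache:
        session_cache["f"] = catalog["f"]
    return session_cache["f"]

executed = [call_f()]               # row 1 -> definition-1
catalog["f"] = "definition-2"       # session-2: CREATE OR REPLACE FUNCTION
executed.append(call_f())           # row 2 -> still definition-1 (cached);
session_cache.clear()               # ...but its nested insert into another
                                    # table accepts the invalidation message
executed.append(call_f())           # row 3 -> definition-2
assert executed == ["definition-1", "definition-1", "definition-2"]
```

So a stale cached parallel-safety marking would be no worse than the stale function body the executor can already see mid-statement.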
I guess this\nmay not be a big deal as we can anyway allow breaking the running\nstatement by changing its definition and users may be okay if the\nparallel statement errors out or behave in an unpredictable way in\nsuch corner cases.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 15 Jun 2021 20:11:19 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Tuesday, June 15, 2021 10:01 PM Robert Haas <robertmhaas@gmail.com> wrote:\r\n> On Tue, Jun 15, 2021 at 7:05 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> > Yeah, dealing with partitioned tables is tricky. I think if we don't\r\n> > want to check upfront the parallel safety of all the partitions then\r\n> > the other option as discussed could be to ask the user to specify the\r\n> > parallel safety of partitioned tables.\r\n> \r\n> Just to be clear here, I don't think it really matters what we *want* to do. I don't\r\n> think it's reasonably *possible* to check all the partitions, because we don't\r\n> hold locks on them. When we're assessing a bunch of stuff related to an\r\n> individual relation, we have a lock on it. I think - though we should\r\n> double-check tablecmds.c - that this is enough to prevent all of the dependent\r\n> objects - triggers, constraints, etc. - from changing. So the stuff we care about\r\n> is stable. But the situation with a partitioned table is different. In that case, we\r\n> can't even examine that stuff without locking all the partitions. And even if we\r\n> do lock all the partitions, the stuff could change immediately afterward and we\r\n> wouldn't know. So I think it would be difficult to make it correct.\r\n> \r\n> Now, maybe it could be done, and I think that's worth a little more thought. 
For\r\n> example, perhaps whenever we invalidate a relation, we could also somehow\r\n> send some new, special kind of invalidation for its parent saying, essentially,\r\n> \"hey, one of your children has changed in a way you might care about.\" But\r\n> that's not good enough, because it only goes up one level. The grandparent\r\n> would still be unaware that a change it potentially cares about has occurred\r\n> someplace down in the partitioning hierarchy. That seems hard to patch up,\r\n> again because of the locking rules. The child can know the OID of its parent\r\n> without locking the parent, but it can't know the OID of its grandparent without\r\n> locking its parent. Walking up the whole partitioning hierarchy might be an\r\n> issue for a number of reasons, including possible deadlocks, and possible race\r\n> conditions where we don't emit all of the right invalidations in the face of\r\n> concurrent changes. So I don't quite see a way around this part of the problem,\r\n> but I may well be missing something. In fact I hope I am missing something,\r\n> because solving this problem would be really nice.\r\n\r\nI think the check of partition could be even more complicated if we need to\r\ncheck the parallel safety of partition key expression. If user directly insert into\r\na partition, then we need invoke ExecPartitionCheck which will execute all it's\r\nparent's and grandparent's partition key expressions. 
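That recursive accumulation of ancestor quals can be pictured roughly like this (a toy model of what generate_partition_qual() produces, with invented relation names — not the real executor code):

```python
# Toy model of the point above: checking a leaf partition evaluates the
# partition constraints of every ancestor, so a parallel-unsafe function
# anywhere up the chain matters even for a direct insert into the leaf.

parent_of = {"leaf": "mid", "mid": "root", "root": None}
qual_of = {"leaf": "k BETWEEN 1 AND 10", "mid": "k < 100", "root": None}

def partition_quals(rel):
    # Mirrors generate_partition_qual(): parent's quals first, then own.
    parent = parent_of[rel]
    inherited = partition_quals(parent) if parent is not None else []
    own = [qual_of[rel]] if qual_of[rel] is not None else []
    return inherited + own

# A direct insert into "leaf" checks the grandparent's expression too.
assert partition_quals("leaf") == ["k < 100", "k BETWEEN 1 AND 10"]
```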
It means if we change a\r\nparent table's partition key expression(by 1) change function in expr or 2) attach\r\nthe parent table as partition of another parent table), then we need to invalidate\r\nall its child's relcache.\r\n\r\nBTW, currently, If user attach a partitioned table 'A' to be partition of another\r\npartitioned table 'B', the child of 'A' will not be invalidated.\r\n\r\nBest regards,\r\nhouzj\r\n", "msg_date": "Wed, 16 Jun 2021 03:27:24 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Tue, Jun 15, 2021 at 7:31 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Jun 15, 2021 at 7:05 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Okay, but I think if we go with your suggested model where whenever\n> > there is a change in parallel-safety of any function, we need to send\n> > the new invalidation then I think it won't matter whether the function\n> > is associated with index expression, check constraint in the table, or\n> > is used in any other way.\n>\n> No, it will still matter, because I'm proposing that the\n> parallel-safety of functions that we only access via operator classes\n> will not even be checked.\n>\n\nI am not very clear on what exactly you have in your mind in this\nregard. I understand that while computing parallel-safety for a rel we\ndon't need to consider functions that we only access via operator\nclass but how do we distinguish such functions during Alter Function?\nIs there a simple way to deduce that this is an operator class\nfunction so don't register invalidation for it? Shall we check it via\npg_depend?\n\n>\n> > We can additionally check the\n> > parallel safety of partitions when we are trying to insert into a\n> > particular partition and error out if we detect any parallel-unsafe\n> > clause and we are in parallel-mode. 
So, in this case, we won't be\n> > completely relying on the users. Users can either change the parallel\n> > safe option of the table or remove/change the parallel-unsafe clause\n> > after error. The new invalidation message as we are discussing would\n> > invalidate the parallel-safety for individual partitions but not the\n> > root partition (partitioned table itself). For root partition, we will\n> > rely on information specified by the user.\n>\n> Yeah, that may be the best we can do. Just to be clear, I think we\n> would want to check whether the relation is still parallel-safe at the\n> start of the operation, but not have a run-time check at each function\n> call.\n>\n\nAgreed, that is what I also had in mind.\n\n> > I am not sure if we have a simple way to check the parallel safety of\n> > partitioned tables. In some way, we need to rely on user either (a) by\n> > providing an option to specify whether parallel Inserts (and/or other\n> > DMLs) can be performed, or (b) by providing a guc and/or rel option\n> > which indicate that we can check the parallel-safety of all the\n> > partitions. Yet another option that I don't like could be to\n> > parallelize inserts on non-partitioned tables.\n>\n> If we figure out a way to check the partitions automatically that\n> actually works, we don't need a switch for it; we can (and should)\n> just do it that way all the time. But if we can't come up with a\n> correct algorithm for that, then we'll need to add some kind of option\n> where the user declares whether it's OK.\n>\n\nYeah, so let us think for some more time and see if we can come up\nwith something better for partitions, otherwise, we can sort out\nthings further in this direction.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 16 Jun 2021 10:55:21 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] 
Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Tue, Jun 15, 2021 at 8:11 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Jun 14, 2021 at 9:08 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Mon, Jun 14, 2021 at 2:32 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > > Yeah, this could be one idea but I think even if we use pg_proc OID,\n> > > we still need to check all the rel cache entries to find which one\n> > > contains the invalidated OID and that could be expensive. I wonder\n> > > can't we directly find the relation involved and register invalidation\n> > > for the same? We are able to find the relation to which trigger\n> > > function is associated during drop function via findDependentObjects\n> > > by scanning pg_depend. Assuming, we are able to find the relation for\n> > > trigger function by scanning pg_depend, what kinds of problems do we\n> > > envision in registering the invalidation for the same?\n> >\n> > I don't think that finding the relation involved and registering an\n> > invalidation for the same will work properly. Suppose there is a\n> > concurrently-running transaction which has created a new table and\n> > attached a trigger function to it. You can't see any of the catalog\n> > entries for that relation yet, so you can't invalidate it, but\n> > invalidation needs to happen. Even if you used some snapshot that can\n> > see those catalog entries before they are committed, I doubt it fixes\n> > the race condition. 
You can't hold any lock on that relation, because\n> > the creating transaction holds AccessExclusiveLock, but the whole\n> > invalidation mechanism is built around the assumption that the sender\n> > puts messages into the shared queue first and then releases locks,\n> > while the receiver first acquires a conflicting lock and then\n> > processes messages from the queue.\n> >\n>\n> Won't such messages be proceesed at start transaction time\n> (AtStart_Cache->AcceptInvalidationMessages)?\n>\n\nEven if accept invalidation at the start transaction time, we need to\naccept and execute it after taking a lock on relation to ensure that\nrelation doesn't change afterward. I think what I mentioned didn't\nbreak this assumption because after finding a relation we will take a\nlock on it before registering the invalidation, so in the above\nscenario, it should wait before registering the invalidation.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 16 Jun 2021 10:57:27 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Tuesday, June 15, 2021 10:01 PM Robert Haas <robertmhaas@gmail.com> wrote:\r\n> On Tue, Jun 15, 2021 at 7:05 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> > Okay, but I think if we go with your suggested model where whenever\r\n> > there is a change in parallel-safety of any function, we need to send\r\n> > the new invalidation then I think it won't matter whether the function\r\n> > is associated with index expression, check constraint in the table, or\r\n> > is used in any other way.\r\n> \r\n> No, it will still matter, because I'm proposing that the parallel-safety of\r\n> functions that we only access via operator classes will not even be checked.\r\n> Also, if we decided to make the system more fine-grained - e.g. 
by invalidating\r\n> on the specific OID of the function that was changed rather than doing\r\n> something that is database-wide or global - then it matters even more.\r\n> \r\n> > Yeah, dealing with partitioned tables is tricky. I think if we don't\r\n> > want to check upfront the parallel safety of all the partitions then\r\n> > the other option as discussed could be to ask the user to specify the\r\n> > parallel safety of partitioned tables.\r\n> \r\n> Just to be clear here, I don't think it really matters what we *want* to do. I don't\r\n> think it's reasonably *possible* to check all the partitions, because we don't\r\n> hold locks on them. When we're assessing a bunch of stuff related to an\r\n> individual relation, we have a lock on it. I think - though we should\r\n> double-check tablecmds.c - that this is enough to prevent all of the dependent\r\n> objects - triggers, constraints, etc. - from changing. So the stuff we care about\r\n> is stable. But the situation with a partitioned table is different. In that case, we\r\n> can't even examine that stuff without locking all the partitions. And even if we\r\n> do lock all the partitions, the stuff could change immediately afterward and we\r\n> wouldn't know. So I think it would be difficult to make it correct.\r\n> \r\n> Now, maybe it could be done, and I think that's worth a little more thought. For\r\n> example, perhaps whenever we invalidate a relation, we could also somehow\r\n> send some new, special kind of invalidation for its parent saying, essentially,\r\n> \"hey, one of your children has changed in a way you might care about.\" But\r\n> that's not good enough, because it only goes up one level. The grandparent\r\n> would still be unaware that a change it potentially cares about has occurred\r\n> someplace down in the partitioning hierarchy. That seems hard to patch up,\r\n> again because of the locking rules. 
The child can know the OID of its parent\r\n> without locking the parent, but it can't know the OID of its grandparent without\r\n> locking its parent. Walking up the whole partitioning hierarchy might be an\r\n> issue for a number of reasons, including possible deadlocks, and possible race\r\n> conditions where we don't emit all of the right invalidations in the face of\r\n> concurrent changes. So I don't quite see a way around this part of the problem,\r\n> but I may well be missing something. In fact I hope I am missing something,\r\n> because solving this problem would be really nice.\r\n\r\nFor partition, I think postgres already have the logic about recursively finding\r\nthe parent table[1]. Can we copy that logic to send serval invalid messages that\r\ninvalidate the parent and grandparent... relcache if change a partition's parallel safety ?\r\nAlthough, it means we need more lock(on its parents) when the parallel safety\r\nchanged, but it seems it's not a frequent scenario and looks acceptable.\r\n\r\n[1] In generate_partition_qual()\r\n\tparentrelid = get_partition_parent(RelationGetRelid(rel), true);\r\n\tparent = relation_open(parentrelid, AccessShareLock);\r\n\t...\r\n\t/* Add the parent's quals to the list (if any) */\r\n\tif (parent->rd_rel->relispartition)\r\n\t\tresult = list_concat(generate_partition_qual(parent), my_qual);\r\n\r\n\r\nBesides, I have a possible crazy idea that maybe it's not necessary to invalidate the\r\nrelcache when parallel safety of function is changed.\r\n\r\nI take a look at what postgres currently behaves, and found that even if user changes\r\na function (CREATE OR REPLACE/ALTER FUNCTION) which is used in\r\nobjects(like: constraint or index expression or partition key expression),\r\nthe data in the relation won't be rechecked. And as the doc said[2], It is *not recommended*\r\nto change the function which is already used in some other objects. 
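The upward walk proposed earlier in this message could be sketched as follows (invented names, and the deadlock/race hazards Robert mentions are deliberately glossed over here):

```python
# Sketch of the proposed recursive walk, modeled on the loop in
# generate_partition_qual(): when a partition's parallel safety
# changes, lock each ancestor in turn and register a relcache
# invalidation for it, so parents and grandparents all hear about it.

parent_of = {"p_2021_06": "p_2021", "p_2021": "sales", "sales": None}

def invalidate_up(rel, invalidations, locks_taken):
    invalidations.append(rel)            # register relcache inval for rel
    parent = parent_of[rel]
    if parent is not None:
        locks_taken.append(parent)       # relation_open(parent, AccessShareLock)
        invalidate_up(parent, invalidations, locks_taken)

inv, locks = [], []
invalidate_up("p_2021_06", inv, locks)
assert inv == ["p_2021_06", "p_2021", "sales"]
assert locks == ["p_2021", "sales"]      # extra locks, but a rare operation
```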
The\r\nrecommended way to handle such a change is to drop the object, adjust the function\r\ndefinition, and re-add the objects. Maybe we only care about the parallel safety\r\nchange when create or drop an object(constraint or index or partition or trigger). And\r\nwe can check the parallel safety when insert into a particular table, if find functions\r\nnot allowed in parallel mode which means someone change the function's parallel safety,\r\nthen we can invalidate the relcache and error out.\r\n\r\n[2]https://www.postgresql.org/docs/14/ddl-constraints.html\r\n\r\nBest regards,\r\nhouzj\r\n", "msg_date": "Wed, 16 Jun 2021 12:40:25 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Tue, Jun 15, 2021 at 10:41 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > I don't think that finding the relation involved and registering an\n> > invalidation for the same will work properly. Suppose there is a\n> > concurrently-running transaction which has created a new table and\n> > attached a trigger function to it. You can't see any of the catalog\n> > entries for that relation yet, so you can't invalidate it, but\n> > invalidation needs to happen. Even if you used some snapshot that can\n> > see those catalog entries before they are committed, I doubt it fixes\n> > the race condition. You can't hold any lock on that relation, because\n> > the creating transaction holds AccessExclusiveLock, but the whole\n> > invalidation mechanism is built around the assumption that the sender\n> > puts messages into the shared queue first and then releases locks,\n> > while the receiver first acquires a conflicting lock and then\n> > processes messages from the queue.\n>\n> Won't such messages be proceesed at start transaction time\n> (AtStart_Cache->AcceptInvalidationMessages)?\n\nOnly if they show up in the queue before that. 
But there's nothing\nforcing that to happen. You don't seem to understand how important\nheavyweight locking is to the whole shared invalidation message\nsystem....\n\n> Okay, so, in this scheme, we have allowed changing the function\n> definition during statement execution but even though the rel's\n> parallel-safe property gets modified (say to parallel-unsafe), we will\n> still proceed in parallel-mode as if it's not changed. I guess this\n> may not be a big deal as we can anyway allow breaking the running\n> statement by changing its definition and users may be okay if the\n> parallel statement errors out or behave in an unpredictable way in\n> such corner cases.\n\nYeah, I mean, it's no different than leaving the parallel-safety\nmarking exactly as it was and changing the body of the function to\ncall some other function marked parallel-unsafe. I don't think we've\ngotten any complaints about that, because I don't think it would\nnormally have any really bad consequences; most likely you'd just get\nan error saying that something-or-other isn't allowed in parallel\nmode. If it does have bad consequences, then I guess we'll have to fix\nit when we find out about it, but in the meantime there's no reason to\nhold the parallel-safety flag to a stricter standard than the function\nbody.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 16 Jun 2021 11:52:45 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Wed, Jun 16, 2021 at 9:22 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Jun 15, 2021 at 10:41 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > I don't think that finding the relation involved and registering an\n> > > invalidation for the same will work properly. 
Suppose there is a\n> > > concurrently-running transaction which has created a new table and\n> > > attached a trigger function to it. You can't see any of the catalog\n> > > entries for that relation yet, so you can't invalidate it, but\n> > > invalidation needs to happen. Even if you used some snapshot that can\n> > > see those catalog entries before they are committed, I doubt it fixes\n> > > the race condition. You can't hold any lock on that relation, because\n> > > the creating transaction holds AccessExclusiveLock, but the whole\n> > > invalidation mechanism is built around the assumption that the sender\n> > > puts messages into the shared queue first and then releases locks,\n> > > while the receiver first acquires a conflicting lock and then\n> > > processes messages from the queue.\n> >\n> > Won't such messages be proceesed at start transaction time\n> > (AtStart_Cache->AcceptInvalidationMessages)?\n>\n> Only if they show up in the queue before that. But there's nothing\n> forcing that to happen. You don't seem to understand how important\n> heavyweight locking is to the whole shared invalidation message\n> system....\n>\n\nI have responded about heavy-weight locking stuff in my next email [1]\nand why I think the approach I mentioned will work. I don't deny that\nI might be missing something here.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1%2BT2CWqp40YqYttDA1Skk7wK6yDrkCD5GZ80QGr5ze-6g%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 17 Jun 2021 14:24:35 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Thu, Jun 17, 2021 at 4:54 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> I have responded about heavy-weight locking stuff in my next email [1]\n> and why I think the approach I mentioned will work. 
I don't deny that\n> I might be missing something here.\n>\n> [1] - https://www.postgresql.org/message-id/CAA4eK1%2BT2CWqp40YqYttDA1Skk7wK6yDrkCD5GZ80QGr5ze-6g%40mail.gmail.com\n\nI mean I saw that but I don't see how it addresses the visibility\nissue. There could be a relation that is not visible to your snapshot\nand upon which AccessExclusiveLock is held which needs to be\ninvalidated. You can't lock it because it's AccessExclusiveLock'd\nalready.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Jun 2021 10:59:10 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Thu, Jun 17, 2021 at 8:29 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Jun 17, 2021 at 4:54 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > I have responded about heavy-weight locking stuff in my next email [1]\n> > and why I think the approach I mentioned will work. I don't deny that\n> > I might be missing something here.\n> >\n> > [1] - https://www.postgresql.org/message-id/CAA4eK1%2BT2CWqp40YqYttDA1Skk7wK6yDrkCD5GZ80QGr5ze-6g%40mail.gmail.com\n>\n> I mean I saw that but I don't see how it addresses the visibility\n> issue.\n>\n\nI thought if we scan a system catalog using DirtySnapshot, then we\nshould be able to find such a relation. But, if the system catalog is\nupdated after our scan then surely we won't be able to see it and in\nthat case, we won't be able to send invalidation. 
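The ordering being argued for — find the dependent relation, take a lock on it (waiting out any AccessExclusiveLock holder), and only then register the invalidation — might be modeled like this (a toy sketch with invented names, not the real lock manager):

```python
# Toy model of the ordering argued for here: the ALTER FUNCTION session
# must lock the dependent relation before registering an invalidation,
# so it simply waits while the creating transaction still holds
# AccessExclusiveLock on that relation.

held_locks = {"t": "creating-session"}   # current AccessExclusiveLock holder
invalidations = []

def try_register_invalidation(rel, me):
    if held_locks.get(rel) not in (None, me):
        return "waiting"                 # block until the holder commits
    held_locks[rel] = me
    invalidations.append(rel)            # safe: rel can no longer change
    return "registered"

# pg_depend scan (DirtySnapshot) found that "t" uses the function:
assert try_register_invalidation("t", "alter-function") == "waiting"
del held_locks["t"]                      # creating transaction commits
assert try_register_invalidation("t", "alter-function") == "registered"
assert invalidations == ["t"]
```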
Now, say if the rel\nis not visible to us because of the snapshot we used or due to some\nrace condition then we won't be able to send the invalidation but why\nwe want to consider it worse than the case where we miss such\ninvalidations (invalidations due to change of parallel-safe property)\nwhen the insertion into relation is in-progress.\n\n> There could be a relation that is not visible to your snapshot\n> and upon which AccessExclusiveLock is held which needs to be\n> invalidated. You can't lock it because it's AccessExclusiveLock'd\n> already.\n>\n\nYeah, the session in which we are doing Alter Function won't be able\nto lock it but it will wait for the AccessExclusiveLock on the rel to\nbe released because it will also try to acquire it before sending\ninvalidation.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 21 Jun 2021 10:26:38 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Wednesday, June 16, 2021 11:27 AM houzj.fnst@fujitsu.com wrote:\r\n> On Tuesday, June 15, 2021 10:01 PM Robert Haas <robertmhaas@gmail.com> wrote:\r\n> > Just to be clear here, I don't think it really matters what we *want*\r\n> > to do. I don't think it's reasonably *possible* to check all the\r\n> > partitions, because we don't hold locks on them. When we're assessing\r\n> > a bunch of stuff related to an individual relation, we have a lock on\r\n> > it. I think - though we should double-check tablecmds.c - that this is\r\n> > enough to prevent all of the dependent objects - triggers,\r\n> > constraints, etc. - from changing. So the stuff we care about is\r\n> > stable. But the situation with a partitioned table is different. In\r\n> > that case, we can't even examine that stuff without locking all the\r\n> > partitions.
And even if we do lock all the partitions, the stuff could change\r\n> immediately afterward and we wouldn't know. So I think it would be difficult to\r\n> > make it correct.\r\n> > Now, maybe it could be done, and I think that's worth a little more\r\n> > thought. For example, perhaps whenever we invalidate a relation, we\r\n> > could also somehow send some new, special kind of invalidation for its\r\n> > parent saying, essentially, \"hey, one of your children has changed in\r\n> > a way you might care about.\" But that's not good enough, because it\r\n> > only goes up one level. The grandparent would still be unaware that a\r\n> > change it potentially cares about has occurred someplace down in the\r\n> > partitioning hierarchy. That seems hard to patch up, again because of\r\n> > the locking rules. The child can know the OID of its parent without\r\n> > locking the parent, but it can't know the OID of its grandparent\r\n> > without locking its parent. Walking up the whole partitioning\r\n> > hierarchy might be an issue for a number of reasons, including\r\n> > possible deadlocks, and possible race conditions where we don't emit\r\n> > all of the right invalidations in the face of concurrent changes. So I\r\n> > don't quite see a way around this part of the problem, but I may well be\r\n> missing something. In fact I hope I am missing something, because solving this\r\n> problem would be really nice.\r\n> \r\n> I think the check of partition could be even more complicated if we need to\r\n> check the parallel safety of partition key expression. If a user directly inserts into a\r\n> partition, then we need to invoke ExecPartitionCheck which will execute all its\r\n> parent's and grandparent's partition key expressions.
It means if we change a\r\n> parent table's partition key expression (by 1) change function in expr or 2)\r\n> attach the parent table as partition of another parent table), then we need to\r\n> invalidate all its children's relcaches.\r\n> \r\n> BTW, currently, if a user attaches a partitioned table 'A' to be a partition of another\r\n> partitioned table 'B', the children of 'A' will not be invalidated.\r\n\r\nTo be honest, I didn't find a cheap way to invalidate partitioned table's\r\nparallel safety automatically. For one thing, we need to recurse higher\r\nin the partition tree to invalidate all the parent tables' relcaches (and perhaps\r\nall their children's relcaches) not only when altering a function's parallel safety,\r\nbut also for DDLs which will invalidate the partition's relcache, such as\r\nCREATE/DROP INDEX/TRIGGER/CONSTRAINT. It seems too expensive. For another,\r\neven if we can invalidate the partitioned table's parallel safety\r\nautomatically, we still need to lock all the partitions when checking the table's\r\nparallel safety, because the partition's parallel safety could be changed\r\nafter checking the parallel safety.\r\n\r\nSo, IMO, at least for partitioned tables, an explicit flag looks more acceptable.\r\nFor regular tables, it seems we can work it out automatically, although\r\nI am not sure whether anyone would think it looks a bit inconsistent.\r\n\r\nBest regards,\r\nhouzj\r\n", "msg_date": "Mon, 21 Jun 2021 11:10:39 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Mon, Jun 21, 2021 at 12:56 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> I thought if we scan a system catalog using DirtySnapshot, then we\n> should be able to find such a relation. But, if the system catalog is\n> updated after our scan then surely we won't be able to see it and in\n> that case, we won't be able to send invalidation.
Now, say if the rel\n> is not visible to us because of the snapshot we used or due to some\n> race condition then we won't be able to send the invalidation but why\n> we want to consider it worse than the case where we miss such\n> invalidations (invalidations due to change of parallel-safe property)\n> when the insertion into relation is in-progress.\n\nA concurrent change is something quite different than a change that\nhappened some time in the past. We all know that DROP TABLE blocks if\nit is run while the table is in use, and everybody considers that\nacceptable, but if DROP TABLE were to block because the table was in\nuse at some previous time, everybody would complain, and rightly so.\nThe same principle applies here. It's not possible to react to a\nchange that happens in the middle of the query. Somebody could argue\nthat we ought to lock all the functions we're using against concurrent\nchanges so that attempts to change their properties block on a lock\nrather than succeeding. But given that that's not how it works, we can\nhardly go back in time and switch to a non-parallel plan after we've\nalready decided on a parallel one. On the other hand, we should be\nable to notice a change that has *already completed* at the time we do\nplanning. I don't see how we can blame failure to do that on anything\nother than bad coding.\n\n> Yeah, the session in which we are doing Alter Function won't be able\n> to lock it but it will wait for the AccessExclusiveLock on the rel to\n> be released because it will also try to acquire it before sending\n> invalidation.\n\nI think users would not be very happy with such behavior.
We can\ninvent new types of invalidations if we want, but they need to be sent\nbased on which objects actually got changed, not based on what we\nthink might be affected indirectly as a result of those changes. It's\nreasonable to regard something like a trigger or constraint as a\nproperty of the table because it is really a dependent object. It is\nassociated with precisely one table when it is created and the\nassociation can never be changed. On the other hand, functions clearly\nhave their own existence. They can be created and dropped\nindependently of any table and the tables with which they are\nassociated can change at any time. In that kind of situation,\ninvalidating the table based on changes to the function is riddled\nwith problems which I am pretty convinced we're never going to be able\nto solve. I'm not 100% sure what we ought to do here, but I'm pretty\nsure that looking up the tables that happen to be associated with the\nfunction in the session that is modifying the function is not it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 21 Jun 2021 11:22:47 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Monday, June 21, 2021 11:23 PM Robert Haas <robertmhaas@gmail.com> wrote:\r\n> On Mon, Jun 21, 2021 at 12:56 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> > Yeah, the session in which we are doing Alter Function won't be able\r\n> > to lock it but it will wait for the AccessExclusiveLock on the rel to\r\n> > be released because it will also try to acquire it before sending\r\n> > invalidation.\r\n> \r\n> I think users would not be very happy with such behavior.
Users accept that if\r\n> they try to access a relation, they might end up needing to wait for a lock on it,\r\n> but what you are proposing here might make a session block waiting for a lock\r\n> on a relation which it never attempted to access.\r\n> \r\n> I think this whole line of attack is a complete dead-end. We can invent new\r\n> types of invalidations if we want, but they need to be sent based on which\r\n> objects actually got changed, not based on what we think might be affected\r\n> indirectly as a result of those changes. It's reasonable to regard something like\r\n> a trigger or constraint as a property of the table because it is really a\r\n> dependent object. It is associated with precisely one table when it is created\r\n> and the association can never be changed. On the other hand, functions clearly\r\n> have their own existence. They can be created and dropped independently of\r\n> any table and the tables with which they are associated can change at any time.\r\n> In that kind of situation, invalidating the table based on changes to the function\r\n> is riddled with problems which I am pretty convinced we're never going to be\r\n> able to solve. I'm not 100% sure what we ought to do here, but I'm pretty sure\r\n> that looking up the tables that happen to be associated with the function in the\r\n> session that is modifying the function is not it.\r\n\r\nI agree that we should send an invalidation message like\r\n\"function OID's parallel safety has changed\". And when each session accepts\r\nthis invalidation message, it needs to invalidate the related tables. Based on\r\nprevious mails, we only want to invalidate the tables that use this function in an\r\nindex expression/trigger/constraint. The problem is how to get all the related\r\ntables. Robert-san suggested caching a list of pg_proc OIDs, which means we need\r\nto rebuild the list every time the relcache is invalidated.
The cost to do that\r\ncould be expensive, especially for extracting pg_proc OIDs from an index expression,\r\nbecause we need to invoke index_open(index, lock) to get the index expression.\r\n\r\nOr, maybe we can let each session use pg_depend to get the related tables and\r\ninvalidate them after accepting the new type of invalidation message.\r\n\r\nBest regards,\r\nhouzj\r\n", "msg_date": "Tue, 22 Jun 2021 10:36:08 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Wed, Jun 16, 2021 at 8:57 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> I think the check of partition could be even more complicated if we need to\n> check the parallel safety of partition key expression. If a user directly inserts into\n> a partition, then we need to invoke ExecPartitionCheck which will execute all its\n> parent's and grandparent's partition key expressions. It means if we change a\n> parent table's partition key expression (by 1) change function in expr or 2) attach\n> the parent table as partition of another parent table), then we need to invalidate\n> all its children's relcaches.\n>\n\nI think we already invalidate the child entries when we add/drop\nconstraints on a parent table. See ATAddCheckConstraint,\nATExecDropConstraint. If I am not missing anything then this case\nshouldn't be a problem. Do you have something else in mind?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 22 Jun 2021 17:54:49 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?]
Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Tuesday, June 22, 2021 8:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Wed, Jun 16, 2021 at 8:57 AM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > I think the check of partition could be even more complicated if we\r\n> > need to check the parallel safety of partition key expression. If a user\r\n> > directly inserts into a partition, then we need to invoke\r\n> > ExecPartitionCheck which will execute all its parent's and\r\n> > grandparent's partition key expressions. It means if we change a\r\n> > parent table's partition key expression (by 1) change function in expr\r\n> > or 2) attach the parent table as partition of another parent table), then we\r\n> need to invalidate all its children's relcaches.\r\n> >\r\n> \r\n> I think we already invalidate the child entries when we add/drop constraints on\r\n> a parent table. See ATAddCheckConstraint, ATExecDropConstraint. If I am not\r\n> missing anything then this case shouldn't be a problem. Do you have\r\n> something else in mind?\r\n\r\nCurrently, attach/detach a partition doesn't invalidate the child entries\r\nrecursively, except when detaching a partition concurrently, which will add a\r\nconstraint to all the children. Do you mean we can add the logic about\r\ninvalidating the child entries recursively when attaching/detaching a partition?\r\n\r\nBest regards,\r\nhouzj\r\n", "msg_date": "Wed, 23 Jun 2021 01:04:51 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [bug?]
Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Wed, Jun 23, 2021 at 6:35 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Tuesday, June 22, 2021 8:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Wed, Jun 16, 2021 at 8:57 AM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > I think the check of partition could be even more complicated if we\n> > > need to check the parallel safety of partition key expression. If a user\n> > > directly inserts into a partition, then we need to invoke\n> > > ExecPartitionCheck which will execute all its parent's and\n> > > grandparent's partition key expressions. It means if we change a\n> > > parent table's partition key expression (by 1) change function in expr\n> > > or 2) attach the parent table as partition of another parent table), then we\n> > need to invalidate all its children's relcaches.\n> > >\n> >\n> > I think we already invalidate the child entries when we add/drop constraints on\n> > a parent table. See ATAddCheckConstraint, ATExecDropConstraint. If I am not\n> > missing anything then this case shouldn't be a problem. Do you have\n> > something else in mind?\n>\n> Currently, attach/detach a partition doesn't invalidate the child entries\n> recursively, except when detaching a partition concurrently, which will add a\n> constraint to all the children. Do you mean we can add the logic about\n> invalidating the child entries recursively when attaching/detaching a partition?\n>\n\nI was talking about adding/dropping CHECK or other constraints on\npartitioned tables via Alter Table. I think if attach/detach leads to\nchange in constraints of child tables then either they should\ninvalidate child rels to avoid problems in the existing sessions or if\nit is not doing so due to a reason then probably it might not matter.
I\nsee that you have started a separate thread [1] to confirm the\nbehavior of attach/detach partition and we might want to decide based\non the conclusion of that thread.\n\n[1] - https://www.postgresql.org/message-id/OS3PR01MB5718DA1C4609A25186D1FBF194089%40OS3PR01MB5718.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 23 Jun 2021 11:44:13 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Wed, Jun 16, 2021 at 6:10 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Tuesday, June 15, 2021 10:01 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > Now, maybe it could be done, and I think that's worth a little more thought. For\n> > example, perhaps whenever we invalidate a relation, we could also somehow\n> > send some new, special kind of invalidation for its parent saying, essentially,\n> > \"hey, one of your children has changed in a way you might care about.\" But\n> > that's not good enough, because it only goes up one level. The grandparent\n> > would still be unaware that a change it potentially cares about has occurred\n> > someplace down in the partitioning hierarchy. That seems hard to patch up,\n> > again because of the locking rules. The child can know the OID of its parent\n> > without locking the parent, but it can't know the OID of its grandparent without\n> > locking its parent. Walking up the whole partitioning hierarchy might be an\n> > issue for a number of reasons, including possible deadlocks, and possible race\n> > conditions where we don't emit all of the right invalidations in the face of\n> > concurrent changes. So I don't quite see a way around this part of the problem,\n> > but I may well be missing something.
In fact I hope I am missing something,\n> > because solving this problem would be really nice.\n>\n> For partitions, I think postgres already has the logic about recursively finding\n> the parent table[1]. Can we copy that logic to send several invalidation messages that\n> invalidate the parent and grandparent... relcache if we change a partition's parallel safety?\n> Although, it means we need more locks (on its parents) when the parallel safety\n> changed, but it seems it's not a frequent scenario and looks acceptable.\n>\n> [1] In generate_partition_qual()\n> parentrelid = get_partition_parent(RelationGetRelid(rel), true);\n> parent = relation_open(parentrelid, AccessShareLock);\n> ...\n> /* Add the parent's quals to the list (if any) */\n> if (parent->rd_rel->relispartition)\n> result = list_concat(generate_partition_qual(parent),\n> my_qual);\n>\n\nAs shown by me in another email [1], such a coding pattern can lead to\ndeadlock. It is because in some DDL operations we walk the partition\nhierarchy from top to down and if we walk from bottom to upwards, then\nthat can lead to deadlock. I think this is a dangerous coding pattern\nand we shouldn't try to replicate it.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1LsFpjK5gL%2B0HEvoqB2DJVOi19vGAWbZBEx8fACOi5%2B_A%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 24 Jun 2021 08:40:25 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Wed, Jun 23, 2021 at 8:10 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Wed, Jun 16, 2021 at 6:10 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > On Tuesday, June 15, 2021 10:01 PM Robert Haas <robertmhaas@gmail.com>\n> wrote:\n> > >\n> > > Now, maybe it could be done, and I think that's worth a little more\n> thought.
For\n> example, perhaps whenever we invalidate a relation, we could also\n> somehow\n> > send some new, special kind of invalidation for its parent saying,\n> essentially,\n> > \"hey, one of your children has changed in a way you might care about.\"\n> But\n> > that's not good enough, because it only goes up one level. The\n> grandparent\n> > would still be unaware that a change it potentially cares about has\n> occurred\n> > someplace down in the partitioning hierarchy. That seems hard to patch\n> up,\n> > again because of the locking rules. The child can know the OID of its\n> parent\n> > without locking the parent, but it can't know the OID of its\n> grandparent without\n> > locking its parent. Walking up the whole partitioning hierarchy might\n> be an\n> > issue for a number of reasons, including possible deadlocks, and\n> possible race\n> > conditions where we don't emit all of the right invalidations in the\n> face of\n> > concurrent changes. So I don't quite see a way around this part of the\n> problem,\n> > but I may well be missing something. In fact I hope I am missing\n> something,\n> > because solving this problem would be really nice.\n> >\n> > For partitions, I think postgres already has the logic about recursively\n> finding\n> > the parent table[1]. Can we copy that logic to send several invalidation\n> messages that\n> > invalidate the parent and grandparent...
relcache if change a\n> partition's parallel safety ?\n> > Although, it means we need more lock(on its parents) when the parallel\n> safety\n> > changed, but it seems it's not a frequent scenario and looks acceptable.\n> >\n> > [1] In generate_partition_qual()\n> > parentrelid = get_partition_parent(RelationGetRelid(rel), true);\n> > parent = relation_open(parentrelid, AccessShareLock);\n> > ...\n> > /* Add the parent's quals to the list (if any) */\n> > if (parent->rd_rel->relispartition)\n> > result = list_concat(generate_partition_qual(parent),\n> my_qual);\n> >\n>\n> As shown by me in another email [1], such a coding pattern can lead to\n> deadlock. It is because in some DDL operations we walk the partition\n> hierarchy from top to down and if we walk from bottom to upwards, then\n> that can lead to deadlock. I think this is a dangerous coding pattern\n> and we shouldn't try to replicate it.\n>\n> [1] -\n> https://www.postgresql.org/message-id/CAA4eK1LsFpjK5gL%2B0HEvoqB2DJVOi19vGAWbZBEx8fACOi5%2B_A%40mail.gmail.com\n>\n> --\n> With Regards,\n> Amit Kapila.\n>\n>\n> Hi,\nHow about walking the partition hierarchy bottom up, recording the parents\nbut not taking the locks.\nOnce top-most parent is found, take the locks in reverse order (top down) ?\n\nCheers\n\n", "msg_date": "Wed, 23 Jun 2021 20:43:30 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Thu, Jun 24, 2021 at 1:38 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n> How about walking the partition hierarchy bottom up, recording the parents but not taking the locks.\n> Once top-most parent is found, take the locks in reverse order (top down) ?\n>\n\nIs it safe to walk up the partition hierarchy (to record the parents\nfor the eventual locking in reverse order) without taking locks?\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Thu, 24 Jun 2021 13:55:14 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?]
Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Thursday, June 24, 2021 11:44 AM Zhihong Yu <zyu@yugabyte.com> wrote:\r\n> Hi,\r\n> How about walking the partition hierarchy bottom up, recording the parents but not taking the locks.\r\n> Once top-most parent is found, take the locks in reverse order (top down) ?\r\n\r\nIMO, when we directly INSERT INTO a partition, postgres already locks the partition\r\nas the target table before execution which means we cannot postpone the lock\r\non partition to find the parent table.\r\n\r\nBest regards,\r\nhouzj\r\n", "msg_date": "Thu, 24 Jun 2021 04:19:47 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Mon, Jun 21, 2021 at 4:40 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> To be honest, I didn't find a cheap way to invalidate partitioned table's\n> parallel safety automatically.\n>\n\nI also don't see the feasibility for doing parallelism checks for\npartitioned tables both because it is expensive due to\ntraversing/locking all the partitions and then the invalidations are\ndifficult to handle due to deadlock hazards as discussed above.\n\nLet me try to summarize the discussion so far and see if we can have\nany better ideas than what we have discussed so far or we want to go\nwith one of the ideas discussed till now. I think we have broadly\ndiscussed two approaches (a) to automatically decide whether\nparallelism can be enabled for inserts, (b) provide an option to the\nuser to specify whether inserts can be parallelized on a relation.\n\nFor the first approach (a), we have evaluated both the partitioned and\nnon-partitioned relation cases. For non-partitioned relations, we can\ncompute the parallel-safety of relation during the planning and save\nit in the relation cache entry.
This is normally safe because we have\na lock on the relation and any change to the relation should raise an\ninvalidation which will lead to re-computation of parallel-safety\ninformation for a relation. Now, there are cases where the\nparallel-safety of some trigger function or a function used in index\nexpression can be changed by the user which won't register an\ninvalidation for a relation. To handle such cases, we can register a\nnew kind of invalidation only when a function's parallel-safety\ninformation is changed. And every backend in the same database then\nneeds to re-evaluate the parallel-safety of every relation for which\nit has cached a value. For partitioned relations, the similar idea\nwon't work because of multiple reasons (a) We need to traverse and\nlock all the partitions to compute the parallel-safety of the root\nrelation which could be very expensive; (b) Whenever we invalidate a\nparticular partition, we need to invalidate its parent hierarchy as\nwell. We can't traverse the parent hierarchy without taking locks on\nthe parent table which can lead to deadlock. The alternative could be\nthat for partitioned relations we can rely on the user-specified\ninformation about parallel-safety (like the approach-b mentioned in\nthe previous paragraph). We can additionally check the parallel safety\nof partitions when we are trying to insert into a particular partition\nand error out if we detect any parallel-unsafe clause and we are in\nparallel-mode. So, in this case, we won't be completely relying on the\nusers. Users can either change the parallel safe option of the table\nor remove/change the parallel-unsafe clause after an error.\n\nFor the second approach (b), we can provide an option to the user to\nspecify whether inserts (or other dml's) can be parallelized for a\nrelation. One of the ideas is to provide some options like below to\nthe user:\nCREATE TABLE table_name (...)
PARALLEL DML { UNSAFE | RESTRICTED | SAFE };\nALTER TABLE table_name PARALLEL DML { UNSAFE | RESTRICTED | SAFE };\n\nThis property is recorded in pg_class's relparallel column as 'u',\n'r', or 's', just like pg_proc's proparallel. The default is UNSAFE.\nAdditionally, provide a function pg_get_parallel_safety(oid) using\nwhich users can determine whether it is safe to enable parallelism.\nSurely, after the user has checked with that function, one can add\nsome unsafe constraints to the table by altering the table but it will\nstill be an aid to enable parallelism on a relation.\n\nThe first approach (a) has an appeal because it would allow to\nautomatically parallelize inserts in many cases but might have some\noverhead in some cases due to processing of relcache entries after the\nparallel-safety of the relation is changed. The second approach (b)\nhas an appeal because of its consistent behavior for partitioned and\nnon-partitioned relations.\n\nAmong the above options, I would personally prefer (b) mainly because\nof the consistent handling for partition and non-partition table cases\nbut I am fine with approach (a) as well if that is what other people\nfeel is better.\n\nThoughts?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 28 Jun 2021 15:21:34 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?]
Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Mon, Jun 28, 2021 at 7:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Among the above options, I would personally prefer (b) mainly because\n> of the consistent handling for partition and non-partition table cases\n> but I am fine with approach (a) as well if that is what other people\n> feel is better.\n>\n> Thoughts?\n>\n\nI personally think \"(b) provide an option to the user to specify\nwhether inserts can be parallelized on a relation\" is the preferable\noption.\nThere seem to be too many issues with the alternative of trying to\ndetermine the parallel-safety of a partitioned table automatically.\nI think (b) is the simplest and most consistent approach, working the\nsame way for all table types, and without the overhead of (a).\nAlso, I don't think (b) is difficult for the user.
At worst, the user\n> can use the provided utility-functions at development-time to verify\n> the intended declared table parallel-safety.\n> I can't really see some mixture of (a) and (b) being acceptable.\n\nYeah, I'd like to have it be automatic, but I don't have a clear idea\nhow to make that work nicely. It's possible somebody (Tom?) can\nsuggest something that I'm overlooking, though.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 2 Jul 2021 10:46:30 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Fri, Jul 2, 2021 at 8:16 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Jun 30, 2021 at 11:46 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> > I personally think \"(b) provide an option to the user to specify\n> > whether inserts can be parallelized on a relation\" is the preferable\n> > option.\n> > There seems to be too many issues with the alternative of trying to\n> > determine the parallel-safety of a partitioned table automatically.\n> > I think (b) is the simplest and most consistent approach, working the\n> > same way for all table types, and without the overhead of (a).\n> > Also, I don't think (b) is difficult for the user. At worst, the user\n> > can use the provided utility-functions at development-time to verify\n> > the intended declared table parallel-safety.\n> > I can't really see some mixture of (a) and (b) being acceptable.\n>\n> Yeah, I'd like to have it be automatic, but I don't have a clear idea\n> how to make that work nicely. It's possible somebody (Tom?) can\n> suggest something that I'm overlooking, though.\n\nIn general, for the non-partitioned table, where we don't have much\noverhead of checking the parallel safety and invalidation is also not\na big problem so I am tempted to provide an automatic parallel safety\ncheck. 
This would enable parallelism for more cases wherever it is\nsuitable without user intervention. OTOH, I understand that providing\nautomatic checking might be very costly if the number of partitions is\nmore. Can't we provide some mid-way where the parallelism is enabled\nby default for the normal table but for the partitioned table it is\ndisabled by default and the user has to set it safe for enabling\nparallelism? I agree that such behavior might sound a bit hackish.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 4 Jul 2021 11:13:53 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Sunday, July 4, 2021 1:44 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\r\n> On Fri, Jul 2, 2021 at 8:16 PM Robert Haas <robertmhaas@gmail.com> wrote:\r\n> >\r\n> > On Wed, Jun 30, 2021 at 11:46 PM Greg Nancarrow <gregn4422@gmail.com>\r\n> wrote:\r\n> > > I personally think \"(b) provide an option to the user to specify\r\n> > > whether inserts can be parallelized on a relation\" is the preferable\r\n> > > option.\r\n> > > There seems to be too many issues with the alternative of trying to\r\n> > > determine the parallel-safety of a partitioned table automatically.\r\n> > > I think (b) is the simplest and most consistent approach, working\r\n> > > the same way for all table types, and without the overhead of (a).\r\n> > > Also, I don't think (b) is difficult for the user. At worst, the\r\n> > > user can use the provided utility-functions at development-time to\r\n> > > verify the intended declared table parallel-safety.\r\n> > > I can't really see some mixture of (a) and (b) being acceptable.\r\n> >\r\n> > Yeah, I'd like to have it be automatic, but I don't have a clear idea\r\n> > how to make that work nicely. It's possible somebody (Tom?) 
can\r\n> > suggest something that I'm overlooking, though.\r\n> \r\n> In general, for the non-partitioned table, where we don't have much overhead\r\n> of checking the parallel safety and invalidation is also not a big problem so I am\r\n> tempted to provide an automatic parallel safety check. This would enable\r\n> parallelism for more cases wherever it is suitable without user intervention.\r\n> OTOH, I understand that providing automatic checking might be very costly if\r\n> the number of partitions is more. Can't we provide some mid-way where the\r\n> parallelism is enabled by default for the normal table but for the partitioned\r\n> table it is disabled by default and the user has to set it safe for enabling\r\n> parallelism? I agree that such behavior might sound a bit hackish.\r\n\r\nAbout the invalidation for non-partitioned table, I think it still has a\r\nproblem: When a function's parallel safety is changed, it's expensive to judge\r\nwhether the function is related to index or trigger or some table-related\r\nobjects by using pg_depend, because we can only do the judgement in each\r\nbackend when accepting an invalidation message. If we don't do that, it means\r\nwhenever a function's parallel safety is changed, we invalidate every relation's\r\ncached safety which looks not very nice to me.\r\n\r\nSo, I personally think \"(b) provide an option to the user to specify whether\r\ninserts can be parallelized on a relation\" is the preferable option.\r\n\r\nBest regards,\r\nhouzj\r\n", "msg_date": "Tue, 6 Jul 2021 01:42:20 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [bug?] 
Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Sun, Jul 4, 2021 at 1:44 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> In general, for the non-partitioned table, where we don't have much\n> overhead of checking the parallel safety and invalidation is also not\n> a big problem so I am tempted to provide an automatic parallel safety\n> check. This would enable parallelism for more cases wherever it is\n> suitable without user intervention. OTOH, I understand that providing\n> automatic checking might be very costly if the number of partitions is\n> more. Can't we provide some mid-way where the parallelism is enabled\n> by default for the normal table but for the partitioned table it is\n> disabled by default and the user has to set it safe for enabling\n> parallelism? I agree that such behavior might sound a bit hackish.\n\nI think that's basically the proposal that Amit and I have been discussing.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 20 Jul 2021 15:00:01 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Wed, Jul 21, 2021 at 12:30 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Sun, Jul 4, 2021 at 1:44 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > In general, for the non-partitioned table, where we don't have much\n> > overhead of checking the parallel safety and invalidation is also not\n> > a big problem so I am tempted to provide an automatic parallel safety\n> > check. This would enable parallelism for more cases wherever it is\n> > suitable without user intervention. OTOH, I understand that providing\n> > automatic checking might be very costly if the number of partitions is\n> > more. 
Can't we provide some mid-way where the parallelism is enabled\n> > by default for the normal table but for the partitioned table it is\n> > disabled by default and the user has to set it safe for enabling\n> > parallelism? I agree that such behavior might sound a bit hackish.\n>\n> I think that's basically the proposal that Amit and I have been discussing.\n>\n\nI see here we have a mix of opinions from various people. Dilip seems\nto be favoring the approach where we provide some option to the user\nfor partitioned tables and automatic behavior for non-partitioned\ntables but he also seems to have mild concerns about this behavior.\nOTOH, Greg and Hou-San seem to favor an approach where we can provide\nan option to the user for both partitioned and non-partitioned tables.\nI am also in favor of providing an option to the user for the sake of\nconsistency in behavior and not trying to introduce a special kind of\ninvalidation as it doesn't serve the purpose for partitioned tables.\nRobert seems to be in favor of automatic behavior but it is not very\nclear to me if he is fine with dealing differently for partitioned and\nnon-partitioned relations. Robert, can you please provide your opinion\non what do you think is the best way to move forward here?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 22 Jul 2021 09:25:47 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Wed, Jul 21, 2021 at 11:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> I see here we have a mix of opinions from various people. 
Dilip seems\n> to be favoring the approach where we provide some option to the user\n> for partitioned tables and automatic behavior for non-partitioned\n> tables but he also seems to have mild concerns about this behavior.\n> OTOH, Greg and Hou-San seem to favor an approach where we can provide\n> an option to the user for both partitioned and non-partitioned tables.\n> I am also in favor of providing an option to the user for the sake of\n> consistency in behavior and not trying to introduce a special kind of\n> invalidation as it doesn't serve the purpose for partitioned tables.\n> Robert seems to be in favor of automatic behavior but it is not very\n> clear to me if he is fine with dealing differently for partitioned and\n> non-partitioned relations. Robert, can you please provide your opinion\n> on what do you think is the best way to move forward here?\n\nI thought we had agreed on handling partitioned and unpartitioned\ntables differently, but maybe I misunderstood the discussion.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 23 Jul 2021 09:25:22 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Fri, Jul 23, 2021 at 6:55 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Jul 21, 2021 at 11:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > I see here we have a mix of opinions from various people. 
Dilip seems\n> > to be favoring the approach where we provide some option to the user\n> > for partitioned tables and automatic behavior for non-partitioned\n> > tables but he also seems to have mild concerns about this behavior.\n> > OTOH, Greg and Hou-San seem to favor an approach where we can provide\n> > an option to the user for both partitioned and non-partitioned tables.\n> > I am also in favor of providing an option to the user for the sake of\n> > consistency in behavior and not trying to introduce a special kind of\n> > invalidation as it doesn't serve the purpose for partitioned tables.\n> > Robert seems to be in favor of automatic behavior but it is not very\n> > clear to me if he is fine with dealing differently for partitioned and\n> > non-partitioned relations. Robert, can you please provide your opinion\n> > on what do you think is the best way to move forward here?\n>\n> I thought we had agreed on handling partitioned and unpartitioned\n> tables differently, but maybe I misunderstood the discussion.\n>\n\nI think for the consistency argument how about allowing users to\nspecify a parallel-safety option for both partitioned and\nnon-partitioned relations but for non-partitioned relations if users\ndidn't specify, it would be computed automatically? If the user has\nspecified parallel-safety option for non-partitioned relation then we\nwould consider that instead of computing the value by ourselves.\n\nAnother reason for hesitation to do automatically for non-partitioned\nrelations was the new invalidation which will invalidate the cached\nparallel-safety for all relations in relcache for a particular\ndatabase. As mentioned by Hou-San [1], it seems we need to do this\nwhenever any function's parallel-safety is changed. OTOH, changing\nparallel-safety for a function is probably not that often to matter in\npractice which is why I think you seem to be fine with this idea. 
So,\nI think, on that premise, it is okay to go ahead with different\nhandling for partitioned and non-partitioned relations here.\n\n[1] - https://www.postgresql.org/message-id/OS0PR01MB5716EC1D07ACCA24373C2557941B9%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 24 Jul 2021 15:22:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Sat, Jul 24, 2021 at 5:52 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> I think for the consistency argument how about allowing users to\n> specify a parallel-safety option for both partitioned and\n> non-partitioned relations but for non-partitioned relations if users\n> didn't specify, it would be computed automatically? If the user has\n> specified parallel-safety option for non-partitioned relation then we\n> would consider that instead of computing the value by ourselves.\n\nHaving the option for both partitioned and non-partitioned tables\ndoesn't seem like the worst idea ever, but I am also not entirely sure\nthat I understand the point.\n\n> Another reason for hesitation to do automatically for non-partitioned\n> relations was the new invalidation which will invalidate the cached\n> parallel-safety for all relations in relcache for a particular\n> database. As mentioned by Hou-San [1], it seems we need to do this\n> whenever any function's parallel-safety is changed. OTOH, changing\n> parallel-safety for a function is probably not that often to matter in\n> practice which is why I think you seem to be fine with this idea.\n\nRight. I think it should be quite rare, and invalidation events are\nalso not crazy expensive. We can test what the worst case is, but if\nyou have to sit there and run ALTER FUNCTION in a tight loop to see a\nmeasurable performance impact, it's not a real problem. 
There may be a\ncode complexity argument against trying to figure it out\nautomatically, perhaps, but I don't think there's a big performance\nissue.\n\nWhat bothers me is that if this is something people have to set\nmanually then many people won't and will not get the benefit of the\nfeature. And some of them will also set it incorrectly and have\nproblems. So I am in favor of trying to determine it automatically\nwhere possible, to make it easy for people. However, other people may\nfeel differently, and I'm not trying to say they're necessarily wrong.\nI'm just telling you what I think.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 26 Jul 2021 11:02:48 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Mon, Jul 26, 2021 at 8:33 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Sat, Jul 24, 2021 at 5:52 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > I think for the consistency argument how about allowing users to\n> > specify a parallel-safety option for both partitioned and\n> > non-partitioned relations but for non-partitioned relations if users\n> > didn't specify, it would be computed automatically? If the user has\n> > specified parallel-safety option for non-partitioned relation then we\n> > would consider that instead of computing the value by ourselves.\n>\n> Having the option for both partitioned and non-partitioned tables\n> doesn't seem like the worst idea ever, but I am also not entirely sure\n> that I understand the point.\n>\n\nConsider below ways to allow the user to specify the parallel-safety option:\n\n(a)\nCREATE TABLE table_name (...) PARALLEL DML { UNSAFE | RESTRICTED | SAFE } ...\nALTER TABLE table_name PARALLEL DML { UNSAFE | RESTRICTED | SAFE } ..\n\nOR\n\n(b)\nCREATE TABLE table_name (...) WITH (parallel_dml_enabled = true)\nALTER TABLE table_name (...) 
WITH (parallel_dml_enabled = true)\n\nThe point was what should we do if the user specifies the option for a\nnon-partitioned table. Do we just ignore it or give an error that this\nis not a valid syntax/option when used with non-partitioned tables? I\nfind it slightly odd that this option works for partitioned tables but\ngives an error for non-partitioned tables but maybe we can document\nit.\n\nWith the above syntax, even if the user doesn't specify the\nparallelism option for non-partitioned relations, we will determine it\nautomatically. Now, in some situations, users might want to force\nparallelism even when we wouldn't have chosen it automatically. It is\npossible that she might face an error due to some parallel-unsafe\nfunction but OTOH, she might have ensured that it is safe to choose\nparallelism in her particular case.\n\n> > Another reason for hesitation to do automatically for non-partitioned\n> > relations was the new invalidation which will invalidate the cached\n> > parallel-safety for all relations in relcache for a particular\n> > database. As mentioned by Hou-San [1], it seems we need to do this\n> > whenever any function's parallel-safety is changed. OTOH, changing\n> > parallel-safety for a function is probably not that often to matter in\n> > practice which is why I think you seem to be fine with this idea.\n>\n> Right. I think it should be quite rare, and invalidation events are\n> also not crazy expensive. We can test what the worst case is, but if\n> you have to sit there and run ALTER FUNCTION in a tight loop to see a\n> measurable performance impact, it's not a real problem. 
There may be a\n> code complexity argument against trying to figure it out\n> automatically, perhaps, but I don't think there's a big performance\n> issue.\n>\n\nTrue, there could be some code complexity but I think we can see once\nthe patch is ready for review.\n\n> What bothers me is that if this is something people have to set\n> manually then many people won't and will not get the benefit of the\n> feature. And some of them will also set it incorrectly and have\n> problems. So I am in favor of trying to determine it automatically\n> where possible, to make it easy for people. However, other people may\n> feel differently, and I'm not trying to say they're necessarily wrong.\n> I'm just telling you what I think.\n>\n\nThanks for all your suggestions and feedback.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 27 Jul 2021 10:44:14 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Tue, Jul 27, 2021 at 10:44 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Jul 26, 2021 at 8:33 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> Consider below ways to allow the user to specify the parallel-safety option:\n>\n> (a)\n> CREATE TABLE table_name (...) PARALLEL DML { UNSAFE | RESTRICTED | SAFE } ...\n> ALTER TABLE table_name PARALLEL DML { UNSAFE | RESTRICTED | SAFE } ..\n>\n> OR\n>\n> (b)\n> CREATE TABLE table_name (...) WITH (parallel_dml_enabled = true)\n> ALTER TABLE table_name (...) WITH (parallel_dml_enabled = true)\n>\n> The point was what should we do if the user specifies the option for a\n> non-partitioned table. Do we just ignore it or give an error that this\n> is not a valid syntax/option when used with non-partitioned tables? 
I\n> find it slightly odd that this option works for partitioned tables but\n> gives an error for non-partitioned tables but maybe we can document\n> it.\n\nIMHO, for a non-partitioned table, we should be default allow the\nparallel safely checking so that users don't have to set it for\nindividual tables, OTOH, I don't think that there is any point in\nblocking the syntax for the non-partitioned table, So I think for the\nnon-partitioned table if the user hasn't set it we should do automatic\nsafety checking and if the user has defined the safety externally then\nwe should respect that. And for the partitioned table, we will never\ndo the automatic safety checking and we should always respect what the\nuser has set.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 27 Jul 2021 11:28:32 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Tue, Jul 27, 2021 at 11:28 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Jul 27, 2021 at 10:44 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Jul 26, 2021 at 8:33 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > Consider below ways to allow the user to specify the parallel-safety option:\n> >\n> > (a)\n> > CREATE TABLE table_name (...) PARALLEL DML { UNSAFE | RESTRICTED | SAFE } ...\n> > ALTER TABLE table_name PARALLEL DML { UNSAFE | RESTRICTED | SAFE } ..\n> >\n> > OR\n> >\n> > (b)\n> > CREATE TABLE table_name (...) WITH (parallel_dml_enabled = true)\n> > ALTER TABLE table_name (...) WITH (parallel_dml_enabled = true)\n> >\n> > The point was what should we do if the user specifies the option for a\n> > non-partitioned table. Do we just ignore it or give an error that this\n> > is not a valid syntax/option when used with non-partitioned tables? 
I\n> > find it slightly odd that this option works for partitioned tables but\n> > gives an error for non-partitioned tables but maybe we can document\n> > it.\n>\n> IMHO, for a non-partitioned table, we should be default allow the\n> parallel safely checking so that users don't have to set it for\n> individual tables, OTOH, I don't think that there is any point in\n> blocking the syntax for the non-partitioned table, So I think for the\n> non-partitioned table if the user hasn't set it we should do automatic\n> safety checking and if the user has defined the safety externally then\n> we should respect that. And for the partitioned table, we will never\n> do the automatic safety checking and we should always respect what the\n> user has set.\n>\n\nThis is exactly what I am saying. BTW, do you have any preference for\nthe syntax among (a) or (b)?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 27 Jul 2021 14:05:51 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Tue, Jul 27, 2021 at 3:58 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> IMHO, for a non-partitioned table, we should be default allow the\n> parallel safely checking so that users don't have to set it for\n> individual tables, OTOH, I don't think that there is any point in\n> blocking the syntax for the non-partitioned table, So I think for the\n> non-partitioned table if the user hasn't set it we should do automatic\n> safety checking and if the user has defined the safety externally then\n> we should respect that. 
And for the partitioned table, we will never\n> do the automatic safety checking and we should always respect what the\n> user has set.\n>\n\nProvided it is possible to distinguish between the default\nparallel-safety (unsafe) and that default being explicitly specified\nby the user, it should be OK.\nIn the case of performing the automatic parallel-safety checking and\nthe table using something that is parallel-unsafe, there will be a\nperformance degradation compared to the current code (hopefully only\nsmall). That can be avoided by the user explicitly specifying that\nit's parallel-unsafe.\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Tue, 27 Jul 2021 20:30:29 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Tue, Jul 27, 2021 at 4:00 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Tue, Jul 27, 2021 at 3:58 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > IMHO, for a non-partitioned table, we should be default allow the\n> > parallel safely checking so that users don't have to set it for\n> > individual tables, OTOH, I don't think that there is any point in\n> > blocking the syntax for the non-partitioned table, So I think for the\n> > non-partitioned table if the user hasn't set it we should do automatic\n> > safety checking and if the user has defined the safety externally then\n> > we should respect that. And for the partitioned table, we will never\n> > do the automatic safety checking and we should always respect what the\n> > user has set.\n> >\n>\n> Provided it is possible to distinguish between the default\n> parallel-safety (unsafe) and that default being explicitly specified\n> by the user, it should be OK.\n>\n\nOffhand, I don't see any problem with this. 
Do you have something\nspecific in mind?\n\n> In the case of performing the automatic parallel-safety checking and\n> the table using something that is parallel-unsafe, there will be a\n> performance degradation compared to the current code (hopefully only\n> small). That can be avoided by the user explicitly specifying that\n> it's parallel-unsafe.\n>\n\nTrue, but I guess this should be largely addressed by caching the\nvalue of parallel safety at the relation level. Sure, there will be\nsome cost the first time we compute it but on consecutive accesses, it\nshould be quite cheap.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 27 Jul 2021 17:06:37 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On July 27, 2021 1:14 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> On Mon, Jul 26, 2021 at 8:33 PM Robert Haas <robertmhaas@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Sat, Jul 24, 2021 at 5:52 AM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > > I think for the consistency argument how about allowing users to\r\n> > > specify a parallel-safety option for both partitioned and\r\n> > > non-partitioned relations but for non-partitioned relations if users\r\n> > > didn't specify, it would be computed automatically? If the user has\r\n> > > specified parallel-safety option for non-partitioned relation then we\r\n> > > would consider that instead of computing the value by ourselves.\r\n> >\r\n> > Having the option for both partitioned and non-partitioned tables\r\n> > doesn't seem like the worst idea ever, but I am also not entirely sure\r\n> > that I understand the point.\r\n> >\r\n> \r\n> Consider below ways to allow the user to specify the parallel-safety option:\r\n> \r\n> (a)\r\n> CREATE TABLE table_name (...) 
PARALLEL DML { UNSAFE | RESTRICTED | SAFE } ...\r\n> ALTER TABLE table_name PARALLEL DML { UNSAFE | RESTRICTED | SAFE } ..\r\n> \r\n> OR\r\n> \r\n> (b)\r\n> CREATE TABLE table_name (...) WITH (parallel_dml_enabled = true)\r\n> ALTER TABLE table_name (...) WITH (parallel_dml_enabled = true)\r\n\r\nPersonally, I think the approach (a) might be better. Since it's similar to\r\nALTER FUNCTION PARALLEL XXX which user might be more familiar with.\r\n\r\nBesides, I think we need a new default value about parallel dml safety. Maybe\r\n'auto' or 'null'(different from safe/restricted/unsafe). Because a user is\r\nlikely to alter the safety to the default value to get the automatic safety\r\ncheck, an independent default value can make it more clear.\r\n\r\nBest regards,\r\nHouzj\r\n", "msg_date": "Wed, 28 Jul 2021 02:52:39 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "On Wed, Jul 28, 2021 at 12:52 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> > Consider below ways to allow the user to specify the parallel-safety option:\n> >\n> > (a)\n> > CREATE TABLE table_name (...) PARALLEL DML { UNSAFE | RESTRICTED | SAFE } ...\n> > ALTER TABLE table_name PARALLEL DML { UNSAFE | RESTRICTED | SAFE } ..\n> >\n> > OR\n> >\n> > (b)\n> > CREATE TABLE table_name (...) WITH (parallel_dml_enabled = true)\n> > ALTER TABLE table_name (...) WITH (parallel_dml_enabled = true)\n>\n> Personally, I think the approach (a) might be better. Since it's similar to\n> ALTER FUNCTION PARALLEL XXX which user might be more familiar with.\n>\n\nI think so too.\n\n> Besides, I think we need a new default value about parallel dml safety. 
Because, user is\n> likely to alter the safety to the default value to get the automatic safety\n> check, a independent default value can make it more clear.\n>\n\nYes, I was thinking something similar when I said \"Provided it is\npossible to distinguish between the default parallel-safety (unsafe)\nand that default being explicitly specified by the user\". If we don't\nhave a new default value, then we need to distinguish these cases, but\nI'm not sure Postgres does something similar elsewhere (for example,\nfor function parallel-safety, it's not currently recorded whether\nparallel-safety=unsafe is because of the default or because the user\nspecifically set it to what is the default value).\nOpinions?\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Wed, 28 Jul 2021 13:20:47 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] Missed parallel safety checks, and wrong parallel safety" }, { "msg_contents": "Note: Changing the subject as I felt the topic has diverted from the\noriginal reported case and also it might help others to pay attention.\n\nOn Wed, Jul 28, 2021 at 8:22 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n> >\n> > Consider below ways to allow the user to specify the parallel-safety option:\n> >\n> > (a)\n> > CREATE TABLE table_name (...) PARALLEL DML { UNSAFE | RESTRICTED | SAFE } ...\n> > ALTER TABLE table_name PARALLEL DML { UNSAFE | RESTRICTED | SAFE } ..\n> >\n> > OR\n> >\n> > (b)\n> > CREATE TABLE table_name (...) WITH (parallel_dml_enabled = true)\n> > ALTER TABLE table_name (...) WITH (parallel_dml_enabled = true)\n>\n> Personally, I think the approach (a) might be better. Since it's similar to\n> ALTER FUNCTION PARALLEL XXX which user might be more familiar with.\n>\n\nOkay, and I think for (b) true/false won't be sufficient because one\nmight want to specify restricted.\n\n> Besides, I think we need a new default value about parallel dml safety. 
Maybe\n> 'auto' or 'null'(different from safe/restricted/unsafe). Because, user is\n> likely to alter the safety to the default value to get the automatic safety\n> check, a independent default value can make it more clear.\n>\n\nHmm, but auto won't work for partitioned tables, right? If so, that\nmight appear like an inconsistency to the user and we need to document\nthe same. Let me summarize the discussion so far in this thread so\nthat it is helpful to others.\n\nWe would like to parallelize INSERT SELECT (first step INSERT +\nparallel SELECT and then Parallel (INSERT + SELECT)) and for that, we\nhave explored a couple of ways. The first approach is to automatically\ndetect if it is safe to parallelize insert and then do it without user\nintervention. To detect automatically, we need to determine the\nparallel-safety of various expressions (like default column\nexpressions, check constraints, index expressions, etc.) at the\nplanning time which can be costly but we can avoid most of the cost if\nwe cache the parallel safety for the relation. So, the cost needs to\nbe paid just once. Now, we can't cache this for partitioned relations\nbecause it can be very costly (as we need to lock all the partitions)\nand has deadlock risks (while processing invalidation), this has been\nexplained in email [1].\n\nNow, as we can't think of a nice way to determine parallel safety\nautomatically for partitioned relations, we thought of providing an\noption to the user. The next thing to decide here is that if we are\nproviding an option to the user in one of the ways as mentioned above\nin the email, what should we do if the user uses that option for\nnon-partitioned relations, shall we just ignore it or give an error\nthat this is not a valid syntax/option? 
The one idea which Dilip and I\nare advocating is to respect the user's input for non-partitioned\nrelations and if it is not given then compute the parallel safety and\ncache it.\n\nTo facilitate users in providing a parallel-safety option, we are\nthinking of providing a utility function\n\"pg_get_table_parallel_dml_safety(regclass)\" that\nreturns records of (objid, classid, parallel_safety) for all parallel\nunsafe/restricted table-related objects from which the table's\nparallel DML safety is determined. This will allow the user to identify\nunsafe objects and, if required, the user can change the parallel safety\nof the required functions and then use the parallel safety option for the\ntable.\n\nThoughts?\n\nNote - This topic has been discussed in another thread as well [2] but\nas many of the key technical points have been discussed here I thought\nit is better to continue here.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1Jwz8xGss4b0-33eyX0i5W_1CnqT16DjB9snVC--DoOsQ%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/TYAPR01MB29905A9AB82CC8BA50AB0F80FE709%40TYAPR01MB2990.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 30 Jul 2021 11:32:40 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Parallel Inserts (WAS: [bug?] Missed parallel safety checks..)" }, { "msg_contents": "On Fri, Jul 30, 2021 at 4:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > Besides, I think we need a new default value about parallel dml safety. Maybe\n> > 'auto' or 'null'(different from safe/restricted/unsafe). Because, user is\n> > likely to alter the safety to the default value to get the automatic safety\n> > check, a independent default value can make it more clear.\n> >\n>\n> Hmm, but auto won't work for partitioned tables, right? If so, that\n> might appear like an inconsistency to the user and we need to document\n> the same. 
Let me summarize the discussion so far in this thread so\n> that it is helpful to others.\n>\n\nTo avoid that inconsistency, UNSAFE could be the default for\npartitioned tables (and we would disallow setting AUTO for these).\nSo then AUTO is the default for non-partitioned tables only.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Fri, 30 Jul 2021 16:52:08 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Inserts (WAS: [bug?] Missed parallel safety checks..)" }, { "msg_contents": "On Friday, July 30, 2021 2:52 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\r\n> On Fri, Jul 30, 2021 at 4:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> >\r\n> > > Besides, I think we need a new default value about parallel dml\r\n> > > safety. Maybe 'auto' or 'null'(different from\r\n> > > safe/restricted/unsafe). Because, user is likely to alter the safety\r\n> > > to the default value to get the automatic safety check, a independent default\r\n> > > value can make it more clear.\r\n> > >\r\n> >\r\n> > Hmm, but auto won't work for partitioned tables, right? If so, that\r\n> > might appear like an inconsistency to the user and we need to document\r\n> > the same. Let me summarize the discussion so far in this thread so\r\n> > that it is helpful to others.\r\n> >\r\n> \r\n> To avoid that inconsistency, UNSAFE could be the default for partitioned tables\r\n> (and we would disallow setting AUTO for these).\r\n> So then AUTO is the default for non-partitioned tables only.\r\n\r\nI think this approach is reasonable, +1.\r\n\r\nBest regards,\r\nhouzj \r\n", "msg_date": "Fri, 30 Jul 2021 13:23:40 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Parallel Inserts (WAS: [bug?] 
Missed parallel safety checks..)" }, { "msg_contents": "On Fri, Jul 30, 2021 at 6:53 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Friday, July 30, 2021 2:52 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> > On Fri, Jul 30, 2021 at 4:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > > Besides, I think we need a new default value about parallel dml\n> > > > safety. Maybe 'auto' or 'null'(different from\n> > > > safe/restricted/unsafe). Because, user is likely to alter the safety\n> > > > to the default value to get the automatic safety check, a independent default\n> > > > value can make it more clear.\n> > > >\n> > >\n> > > Hmm, but auto won't work for partitioned tables, right? If so, that\n> > > might appear like an inconsistency to the user and we need to document\n> > > the same. Let me summarize the discussion so far in this thread so\n> > > that it is helpful to others.\n> > >\n> >\n> > To avoid that inconsistency, UNSAFE could be the default for partitioned tables\n> > (and we would disallow setting AUTO for these).\n> > So then AUTO is the default for non-partitioned tables only.\n>\n> I think this approach is reasonable, +1.\n>\n\nI see the need to change to default via Alter Table but I am not sure\nif Auto is the most appropriate way to handle that. How about using\nDEFAULT itself as we do in the case of REPLICA IDENTITY? So, if users\nhave to alter parallel safety value to default, they need to just say\nParallel DML DEFAULT. The default would mean automatic behavior for\nnon-partitioned relations and ignore parallelism for partitioned\ntables.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 2 Aug 2021 10:22:32 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Inserts (WAS: [bug?] 
Missed parallel safety checks..)" }, { "msg_contents": "On Mon, Aug 2, 2021 at 2:52 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jul 30, 2021 at 6:53 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > On Friday, July 30, 2021 2:52 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> > > On Fri, Jul 30, 2021 at 4:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > > Besides, I think we need a new default value about parallel dml\n> > > > > safety. Maybe 'auto' or 'null'(different from\n> > > > > safe/restricted/unsafe). Because, user is likely to alter the safety\n> > > > > to the default value to get the automatic safety check, a independent default\n> > > > > value can make it more clear.\n> > > > >\n> > > >\n> > > > Hmm, but auto won't work for partitioned tables, right? If so, that\n> > > > might appear like an inconsistency to the user and we need to document\n> > > > the same. Let me summarize the discussion so far in this thread so\n> > > > that it is helpful to others.\n> > > >\n> > >\n> > > To avoid that inconsistency, UNSAFE could be the default for partitioned tables\n> > > (and we would disallow setting AUTO for these).\n> > > So then AUTO is the default for non-partitioned tables only.\n> >\n> > I think this approach is reasonable, +1.\n> >\n>\n> I see the need to change to default via Alter Table but I am not sure\n> if Auto is the most appropriate way to handle that. How about using\n> DEFAULT itself as we do in the case of REPLICA IDENTITY? So, if users\n> have to alter parallel safety value to default, they need to just say\n> Parallel DML DEFAULT. 
The default would mean automatic behavior for\n> non-partitioned relations and ignore parallelism for partitioned\n> tables.\n>\n\nHmm, I'm not so sure I'm sold on that.\nI personally think \"DEFAULT\" here is vague, and users then need to\nknow what DEFAULT equates to, based on the type of table (partitioned\nor non-partitioned table).\nAlso, then there's two ways to set the actual \"default\" DML\nparallel-safety for partitioned tables: DEFAULT or UNSAFE.\nAt least \"AUTO\" is a meaningful default option name for\nnon-partitioned tables - \"automatic\" parallel-safety checking, and the\nfact that it isn't the default (and can't be set) for partitioned\ntables highlights the difference in the way being proposed to treat\nthem (i.e. use automatic checking only for non-partitioned tables).\nI'd be interested to hear what others think.\nI think a viable alternative would be to record whether an explicit\nDML parallel-safety has been specified, and if not, apply default\nbehavior (i.e. by default use automatic checking for non-partitioned\ntables and treat partitioned tables as UNSAFE). I'm just not sure\nwhether this kind of distinction (explicit vs implicit default) has\nbeen used before in Postgres options.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Mon, 2 Aug 2021 16:04:18 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Inserts (WAS: [bug?] 
Missed parallel safety checks..)" }, { "msg_contents": "On August 2, 2021 2:04 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\r\n> On Mon, Aug 2, 2021 at 2:52 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> >\r\n> > On Fri, Jul 30, 2021 at 6:53 PM houzj.fnst@fujitsu.com\r\n> > <houzj.fnst@fujitsu.com> wrote:\r\n> > >\r\n> > > On Friday, July 30, 2021 2:52 PM Greg Nancarrow <gregn4422@gmail.com>\r\n> wrote:\r\n> > > > On Fri, Jul 30, 2021 at 4:02 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > > > >\r\n> > > > > > Besides, I think we need a new default value about parallel\r\n> > > > > > dml safety. Maybe 'auto' or 'null'(different from\r\n> > > > > > safe/restricted/unsafe). Because, user is likely to alter the\r\n> > > > > > safety to the default value to get the automatic safety check,\r\n> > > > > > a independent default value can make it more clear.\r\n> > > > > >\r\n> > > > >\r\n> > > > > Hmm, but auto won't work for partitioned tables, right? If so,\r\n> > > > > that might appear like an inconsistency to the user and we need\r\n> > > > > to document the same. Let me summarize the discussion so far in\r\n> > > > > this thread so that it is helpful to others.\r\n> > > > >\r\n> > > >\r\n> > > > To avoid that inconsistency, UNSAFE could be the default for\r\n> > > > partitioned tables (and we would disallow setting AUTO for these).\r\n> > > > So then AUTO is the default for non-partitioned tables only.\r\n> > >\r\n> > > I think this approach is reasonable, +1.\r\n> > >\r\n> >\r\n> > I see the need to change to default via Alter Table but I am not sure\r\n> > if Auto is the most appropriate way to handle that. How about using\r\n> > DEFAULT itself as we do in the case of REPLICA IDENTITY? So, if users\r\n> > have to alter parallel safety value to default, they need to just say\r\n> > Parallel DML DEFAULT. 
The default would mean automatic behavior for\r\n> > non-partitioned relations and ignore parallelism for partitioned\r\n> > tables.\r\n> >\r\n> \r\n> Hmm, I'm not so sure I'm sold on that.\r\n> I personally think \"DEFAULT\" here is vague, and users then need to know what\r\n> DEFAULT equates to, based on the type of table (partitioned or non-partitioned\r\n> table).\r\n> Also, then there's two ways to set the actual \"default\" DML parallel-safety for\r\n> partitioned tables: DEFAULT or UNSAFE.\r\n> At least \"AUTO\" is a meaningful default option name for non-partitioned tables\r\n> - \"automatic\" parallel-safety checking, and the fact that it isn't the default (and\r\n> can't be set) for partitioned tables highlights the difference in the way being\r\n> proposed to treat them (i.e. use automatic checking only for non-partitioned\r\n> tables).\r\n> I'd be interested to hear what others think.\r\n> I think a viable alternative would be to record whether an explicit DML\r\n> parallel-safety has been specified, and if not, apply default behavior (i.e. by\r\n> default use automatic checking for non-partitioned tables and treat partitioned\r\n> tables as UNSAFE). I'm just not sure whether this kind of distinction (explicit vs\r\n> implicit default) has been used before in Postgres options.\r\n\r\nI think both approaches are fine, but using \"DEFAULT\" might has a disadvantage\r\nthat if we somehow support automatic safety check for partitioned table in the\r\nfuture, then the meaning of \"DEFAULT\" for partitioned table will change from\r\nUNSAFE to automatic check. It could also bring some burden on the user to\r\nmodify their sql script.\r\n\r\nBest regards,\r\nhouzj\r\n", "msg_date": "Mon, 2 Aug 2021 06:30:21 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Parallel Inserts (WAS: [bug?] 
Missed parallel safety checks..)" }, { "msg_contents": "Based on the discussion here, I implemented the auto-safety-check feature.\r\nSince most of the technical discussion happened here, I attached the patches in\r\nthis thread.\r\n\r\nThe patches allow users to specify a parallel-safety option for both\r\npartitioned and non-partitioned relations, and for non-partitioned relations if\r\nusers didn't specify, it would be computed automatically. If the user has\r\nspecified a parallel-safety option then we would consider that instead of\r\ncomputing the value by ourselves. But for a partitioned table, if users didn't\r\nspecify the parallel dml safety, it will treat it as unsafe.\r\n\r\nFor non-partitioned relations, after computing the parallel-safety of relation\r\nduring the planning, we save it in the relation cache entry and invalidate the\r\ncached parallel-safety for all relations in relcache for a particular database\r\nwhenever any function's parallel-safety is changed.\r\n\r\nTo make it possible for the user to alter the safety to an unspecified value to\r\nget the automatic safety check, add a new default option (temporarily named\r\n'DEFAULT' in addition to safe/unsafe/restricted) about parallel dml safety.\r\n\r\nTo facilitate users in providing a parallel-safety option, provide a utility\r\nfunction \"pg_get_table_parallel_dml_safety(regclass)\" that returns records of\r\n(objid, classid, parallel_safety) for all parallel unsafe/restricted\r\ntable-related objects from which the table's parallel DML safety is determined.\r\nThis will allow the user to identify unsafe objects and, if required, the user can\r\nchange the parallel safety of the required functions and then use the parallel\r\nsafety option for the table.\r\n\r\nBest regards,\r\nhouzj", "msg_date": "Tue, 3 Aug 2021 07:40:22 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Parallel Inserts (WAS: [bug?] 
Missed parallel safety checks..)" }, { "msg_contents": "On Tues, August 3, 2021 3:40 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\r\n> Based on the discussion here, I implemented the auto-safety-check feature.\r\n> Since most of the technical discussion happened here, I attached the patches in\r\n> this thread.\r\n> \r\n> The patches allow users to specify a parallel-safety option for both partitioned\r\n> and non-partitioned relations, and for non-partitioned relations if users didn't\r\n> specify, it would be computed automatically. If the user has specified\r\n> a parallel-safety option then we would consider that instead of computing the\r\n> value by ourselves. But for a partitioned table, if users didn't specify the parallel\r\n> dml safety, it will treat it as unsafe.\r\n> \r\n> For non-partitioned relations, after computing the parallel-safety of relation\r\n> during the planning, we save it in the relation cache entry and invalidate the\r\n> cached parallel-safety for all relations in relcache for a particular database\r\n> whenever any function's parallel-safety is changed.\r\n> \r\n> To make it possible for the user to alter the safety to an unspecified value to get the\r\n> automatic safety check, add a new default option (temporarily named 'DEFAULT'\r\n> in addition to safe/unsafe/restricted) about parallel dml safety.\r\n> \r\n> To facilitate users in providing a parallel-safety option, provide a utility\r\n> function \"pg_get_table_parallel_dml_safety(regclass)\" that returns records of\r\n> (objid, classid, parallel_safety) for all parallel unsafe/restricted table-related\r\n> objects from which the table's parallel DML safety is determined.\r\n> This will allow the user to identify unsafe objects and, if required, the user can change\r\n> the parallel safety of the required functions and then use the parallel safety option\r\n> for the table.\r\n\r\nUpdate the commit message in patches to make it easier for others to review.\r\n\r\nBest 
regards,\r\nHouzj", "msg_date": "Fri, 6 Aug 2021 08:23:09 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Parallel Inserts (WAS: [bug?] Missed parallel safety checks..)" }, { "msg_contents": "On Fri, Aug 6, 2021 4:23 PM Hou zhijie <houzj.fnst@fujitsu.com> wrote:\r\n> \r\n> Update the commit message in patches to make it easier for others to review.\r\n\r\nCFbot reported a compile error due to recent commit 3aafc03.\r\nAttach rebased patches which fix the error.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Thu, 19 Aug 2021 08:16:11 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Parallel Inserts (WAS: [bug?] Missed parallel safety checks..)" }, { "msg_contents": "Thursday, August 19, 2021 4:16 PM Hou zhijie <houzj.fnst@fujitsu.com> wrote:\r\n> On Fri, Aug 6, 2021 4:23 PM Hou zhijie <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > Update the commit message in patches to make it easier for others to review.\r\n> \r\n> CFbot reported a compile error due to recent commit 3aafc03.\r\n> Attach rebased patches which fix the error.\r\n\r\nThe patch can't apply to the HEAD branch due a recent commit.\r\nAttach rebased patches.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Wed, 1 Sep 2021 09:23:48 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Parallel Inserts (WAS: [bug?] 
Missed parallel safety checks..)" }, { "msg_contents": "From: Wednesday, September 1, 2021 5:24 PM Hou Zhijie<houzj.fnst@fujitsu.com>\r\n> Thursday, August 19, 2021 4:16 PM Hou zhijie <houzj.fnst@fujitsu.com> wrote:\r\n> > On Fri, Aug 6, 2021 4:23 PM Hou zhijie <houzj.fnst@fujitsu.com> wrote:\r\n> > >\r\n> > > Update the commit message in patches to make it easier for others to\r\n> review.\r\n> >\r\n> > CFbot reported a compile error due to recent commit 3aafc03.\r\n> > Attach rebased patches which fix the error.\r\n> \r\n> The patch can't apply to the HEAD branch due a recent commit.\r\n> Attach rebased patches.\r\n\r\nIn the past, the rewriter could generate a re-written query with a modifying\r\nCTE does not have hasModifyingCTE flag set and this bug cause the regression\r\ntest(force_parallel_mode=regress) failure when enable parallel select for\r\ninsert, so , we had a workaround 0006.patch for it. But now, the bug has been\r\nfixed in commit 362e2d and we don't need the workaround patch anymore.\r\n\r\nAttach new version patch set which remove the workaround patch.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Thu, 9 Sep 2021 02:12:08 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Parallel Inserts (WAS: [bug?] 
Missed parallel safety checks..)" }, { "msg_contents": "Hi,\n\nOn Thu, Sep 09, 2021 at 02:12:08AM +0000, houzj.fnst@fujitsu.com wrote:\n> \n> Attach new version patch set which remove the workaround patch.\n\nThis version of the patchset doesn't apply anymore:\n\nhttp://cfbot.cputube.org/patch_36_3143.log\n=== Applying patches on top of PostgreSQL commit ID a18b6d2dc288dfa6e7905ede1d4462edd6a8af47 ===\n=== applying patch ./v19-0001-CREATE-ALTER-TABLE-PARALLEL-DML.patch\n[...]\npatching file src/backend/commands/tablecmds.c\nHunk #1 FAILED at 40.\nHunk #2 succeeded at 624 (offset 21 lines).\nHunk #3 succeeded at 670 (offset 21 lines).\nHunk #4 succeeded at 947 (offset 19 lines).\nHunk #5 succeeded at 991 (offset 19 lines).\nHunk #6 succeeded at 4256 (offset 40 lines).\nHunk #7 succeeded at 4807 (offset 40 lines).\nHunk #8 succeeded at 5217 (offset 40 lines).\nHunk #9 succeeded at 6193 (offset 42 lines).\nHunk #10 succeeded at 19278 (offset 465 lines).\n1 out of 10 hunks FAILED -- saving rejects to file src/backend/commands/tablecmds.c.rej\n[...]\npatching file src/bin/pg_dump/pg_dump.c\nHunk #1 FAILED at 6253.\nHunk #2 FAILED at 6358.\nHunk #3 FAILED at 6450.\nHunk #4 FAILED at 6503.\nHunk #5 FAILED at 6556.\nHunk #6 FAILED at 6609.\nHunk #7 FAILED at 6660.\nHunk #8 FAILED at 6708.\nHunk #9 FAILED at 6756.\nHunk #10 FAILED at 6803.\nHunk #11 FAILED at 6872.\nHunk #12 FAILED at 6927.\nHunk #13 succeeded at 15524 (offset -1031 lines).\n12 out of 13 hunks FAILED -- saving rejects to file src/bin/pg_dump/pg_dump.c.rej\n[...]\npatching file src/bin/psql/describe.c\nHunk #1 succeeded at 1479 (offset -177 lines).\nHunk #2 succeeded at 1493 (offset -177 lines).\nHunk #3 succeeded at 1631 (offset -241 lines).\nHunk #4 succeeded at 3374 (offset -277 lines).\nHunk #5 succeeded at 3731 (offset -310 lines).\nHunk #6 FAILED at 4109.\n1 out of 6 hunks FAILED -- saving rejects to file src/bin/psql/describe.c.rej\n\nCould you send a rebased version? 
In the meantime I will switch the entry to\nWaiting on Author.\n\n\n", "msg_date": "Fri, 14 Jan 2022 20:14:02 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Inserts (WAS: [bug?] Missed parallel safety checks..)" }, { "msg_contents": "On Thu, Jul 28, 2022 at 8:43 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> Could you send a rebased version? In the meantime I will switch the entry to\n> Waiting on Author.\n\nBy request in [1] I'm marking this Returned with Feedback for now.\nWhenever you're ready, you can resurrect the patch entry by visiting\n\n https://commitfest.postgresql.org/38/3143/\n\nand changing the status to \"Needs Review\", and then changing the\nstatus again to \"Move to next CF\". (Don't forget the second step;\nhopefully we will have streamlined this in the near future!)\n\nThanks,\n--Jacob\n\n[1] https://www.postgresql.org/message-id/OS0PR01MB571696D623F35A09AB51903A94969%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n\n\n", "msg_date": "Thu, 28 Jul 2022 08:51:41 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Inserts (WAS: [bug?] Missed parallel safety checks..)" } ]
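For readers skimming the thread above: pieced together, the proposal under discussion can be sketched roughly as below. This is a hedged sketch only — the PARALLEL DML { SAFE | UNSAFE | RESTRICTED | DEFAULT } clause and the pg_get_table_parallel_dml_safety() function come solely from the uncommitted patches posted in this thread (later returned with feedback), so none of these statements run on stock PostgreSQL.

```sql
-- Declare a table's DML parallel safety explicitly (the option (a)
-- syntax from the discussion; hypothetical, from the uncommitted patches):
CREATE TABLE t1 (a int, b int) PARALLEL DML SAFE;
ALTER TABLE t1 PARALLEL DML UNSAFE;

-- 'DEFAULT' restores the unspecified state: automatic safety checking
-- for non-partitioned tables, treated as unsafe for partitioned tables.
ALTER TABLE t1 PARALLEL DML DEFAULT;

-- Proposed utility function: list the parallel unsafe/restricted
-- table-related objects (functions in column defaults, check
-- constraints, index expressions, ...) from which the table's DML
-- parallel safety is derived.
SELECT * FROM pg_get_table_parallel_dml_safety('t1'::regclass);
```

As discussed above, a user would typically run the SELECT first to find the offending objects, mark the relevant functions PARALLEL SAFE with ALTER FUNCTION, and only then declare the table itself PARALLEL DML SAFE.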
[ { "msg_contents": "Hi,\n\nThis is a small patch (against master) to allow an application using \nlibpq with GSSAPI authentication to specify where to fetch the \ncredential cache from -- it effectively consists of a new field in \nPQconninfoOptions to store this data and (where the user has specified a \nccache location) a call into the gss_krb5_ccache_name function in the \nGSSAPI library.\n\nIt's my first go at submitting a patch -- it works as far as I can tell, \nbut I suspect there will probably still be stuff to fix before it's \nready to use!\n\nAs far as I'm concerned this is working (the code compiles successfully \nfollowing \"./configure --with-gssapi --enable-cassert\", and seems to \nwork for specifying the ccache location without any noticeable errors).\n\nI hope there shouldn't be anything platform-specific here (I've been \nworking on Ubuntu Linux but the only interactions with external \napplications are via the GSSAPI library, which was already in use).\n\nThe dispsize value for ccache_name is 64 in this code (which seems to be \nwhat's used with other file-path-like parameters in the existing code) \nbut I'm happy to have this corrected if it needs a different value -- as \nfar as I can tell this is just for display purposes rather than anything \ncritical in terms of actually storing the value?\n\nIf no ccache_name is specified in the connection string then it defaults \nto NULL, which means the gss_krb5_ccache_name call is not made and the \ncurrent behaviour (of letting the GSSAPI library work out the location \nof the ccache) is not changed.\n\nMany thanks,\nDaniel", "msg_date": "Tue, 20 Apr 2021 10:37:18 +0100", "msg_from": "Daniel Carter <danielchriscarter+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "PATCH: Add GSSAPI ccache_name option to libpq" }, { "msg_contents": "Hi Daniel,\n\n> It's my first go at submitting a patch -- it works as far as I can tell,\n> but I suspect there will probably still be stuff to fix before 
it's\n> ready to use!\n\nYou are doing great :)\n\n> As far as I'm concerned this is working (the code compiles successfully\n> following \"./configure --with-gssapi --enable-cassert\", and seems to\n> work for specifying the ccache location without any noticeable errors).\n\nThere are several other things worth checking:\n0. Always run `make distclean` before following steps\n1. Make sure `make -j4 world && make -j4 check-world` passes\n2. Make sure `make install-world` and `make installcheck-world` passes\n3. Since you are changing the documentation it's worth checking that\nit displays properly. The documentation is in the\n$(PGINSTALL)/share/doc/postgresql/html directory\n\nSeveral years ago I published some scripts that simplify all this a\nlittle: https://github.com/afiskon/pgscripts, especially step 3. They\nmay require some modifications for your OS of choice. Please read\nhttps://wiki.postgresql.org/wiki/Submitting_a_Patch for more\ninformation.\n\nGenerally speaking, it also a good idea to add some test cases for\nyour code, although I understand why it might be a little complicated\nin this particular case. 
Maybe you could at least tell us how it can\nbe checked manually that this code actually does what is supposed to?\n\nOn Tue, Apr 20, 2021 at 12:37 PM Daniel Carter\n<danielchriscarter+postgres@gmail.com> wrote:\n>\n> Hi,\n>\n> This is a small patch (against master) to allow an application using\n> libpq with GSSAPI authentication to specify where to fetch the\n> credential cache from -- it effectively consists of a new field in\n> PQconninfoOptions to store this data and (where the user has specified a\n> ccache location) a call into the gss_krb5_ccache_name function in the\n> GSSAPI library.\n>\n> It's my first go at submitting a patch -- it works as far as I can tell,\n> but I suspect there will probably still be stuff to fix before it's\n> ready to use!\n>\n> As far as I'm concerned this is working (the code compiles successfully\n> following \"./configure --with-gssapi --enable-cassert\", and seems to\n> work for specifying the ccache location without any noticeable errors).\n>\n> I hope there shouldn't be anything platform-specific here (I've been\n> working on Ubuntu Linux but the only interactions with external\n> applications are via the GSSAPI library, which was already in use).\n>\n> The dispsize value for ccache_name is 64 in this code (which seems to be\n> what's used with other file-path-like parameters in the existing code)\n> but I'm happy to have this corrected if it needs a different value -- as\n> far as I can tell this is just for display purposes rather than anything\n> critical in terms of actually storing the value?\n>\n> If no ccache_name is specified in the connection string then it defaults\n> to NULL, which means the gss_krb5_ccache_name call is not made and the\n> current behaviour (of letting the GSSAPI library work out the location\n> of the ccache) is not changed.\n>\n> Many thanks,\n> Daniel\n>\n\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 20 Apr 2021 13:30:52 +0300", "msg_from": "Aleksander Alekseev 
<aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add GSSAPI ccache_name option to libpq" }, { "msg_contents": "Hi\n\nOn Tue, Apr 20, 2021 at 10:37 AM Daniel Carter <\ndanielchriscarter+postgres@gmail.com> wrote:\n\n> Hi,\n>\n> This is a small patch (against master) to allow an application using\n> libpq with GSSAPI authentication to specify where to fetch the\n> credential cache from -- it effectively consists of a new field in\n> PQconninfoOptions to store this data and (where the user has specified a\n> ccache location) a call into the gss_krb5_ccache_name function in the\n> GSSAPI library.\n>\n\nThe pgAdmin team would love to have this feature. It would greatly simplify\nmanagement of multiple connections from different users.\n\n\n>\n> It's my first go at submitting a patch -- it works as far as I can tell,\n> but I suspect there will probably still be stuff to fix before it's\n> ready to use!\n>\n> As far as I'm concerned this is working (the code compiles successfully\n> following \"./configure --with-gssapi --enable-cassert\", and seems to\n> work for specifying the ccache location without any noticeable errors).\n>\n> I hope there shouldn't be anything platform-specific here (I've been\n> working on Ubuntu Linux but the only interactions with external\n> applications are via the GSSAPI library, which was already in use).\n>\n> The dispsize value for ccache_name is 64 in this code (which seems to be\n> what's used with other file-path-like parameters in the existing code)\n> but I'm happy to have this corrected if it needs a different value -- as\n> far as I can tell this is just for display purposes rather than anything\n> critical in terms of actually storing the value?\n>\n> If no ccache_name is specified in the connection string then it defaults\n> to NULL, which means the gss_krb5_ccache_name call is not made and the\n> current behaviour (of letting the GSSAPI library work out the location\n> of the ccache) is not 
changed.\n>\n> Many thanks,\n> Daniel\n>\n>\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com", "msg_date": "Tue, 20 Apr 2021 11:41:42 +0100", "msg_from": "Dave Page <dpage@pgadmin.org>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add GSSAPI ccache_name option to libpq" }, { "msg_contents": "Hi Aleksander,\n\nOn 20/04/2021 11:30, Aleksander Alekseev wrote:\n> Hi Daniel,\n> \n>> It's my first go at submitting a patch -- it works as far as I can tell,\n>> but I suspect there will probably still be stuff to fix before it's\n>> ready to use!\n> \n> You are doing great :)\n\nThanks for the encouragement!\n\n> There are several other things worth checking:\n> 0. Always run `make distclean` before following steps\n> 1. Make sure `make -j4 world && make -j4 check-world` passes\n> 2. Make sure `make install-world` and `make installcheck-world` passes\n> 3. Since you are changing the documentation it's worth checking that\n> it displays properly. The documentation is in the\n> $(PGINSTALL)/share/doc/postgresql/html directory\n> \n> Several years ago I published some scripts that simplify all this a\n> little: https://github.com/afiskon/pgscripts, especially step 3. They\n> may require some modifications for your OS of choice. 
Please read\n> https://wiki.postgresql.org/wiki/Submitting_a_Patch for more\n> information.\n\nThanks for the advice (and the script repository).\n\nOne thing this has identified is an implicit declaration error on the \ngss_krb5_ccache_name call (the code was still working so I presume it \nmust get included at some point, although I can't see exactly where).\n\nThis can be fixed easily enough just by adding a `#include \n<gssapi/gssapi_krb5.h>` line to libpq-int.h, although I don't know \nwhether this wants to be treated differently because (as far as I can \ntell) it's a Kerberos-specific feature rather than something which any \nGSSAPI service could use (hence it being in gssapi_krb5.h rather than \ngssapi.h) and so might end up breaking other things?\n\n(It looks like current versions of both MIT Kerberos and Heimdal use \n<gssapi/gssapi.h> rather than <gssapi.h>, although Heimdal previously \nhad all its GSSAPI functionality, including this gss_krb5_ccache_name \nfunction, in <gssapi.h>.)\n\n> Generally speaking, it also a good idea to add some test cases for\n> your code, although I understand why it might be a little complicated\n> in this particular case. 
Maybe you could at least tell us how it can\n> be checked manually that this code actually does what is supposed to?\n\nSomething like the following code hopefully demonstrates how it's \nsupposed to work:\n\n> const char *conninfo = \"dbname='test' user='test' host='krb.local' port='5432' ccache_name='/home/user/test/krb5cc_1000'\";\n> PGconn *conn;\n> \n> conn = PQconnectdb(conninfo);\n> \n> if(PQstatus(conn) != CONNECTION_OK) {\n> fprintf(stderr, \"Connection to database failed: %s\\n\", PQerrorMessage(conn));\n> } else {\n> printf(\"Connection succeeded\\n\");\n> }\n> PQfinish(conn);\n\nHopefully this example gives some sort of guide to its intended purpose \n-- the ccache_name parameter in the connection string specifies a \n(non-standard) location for the credential cache, which is then used by \nlibpq to fetch data from the database via GSSAPI authentication.\n\nMany thanks,\nDaniel\n\n\n", "msg_date": "Tue, 20 Apr 2021 17:28:46 +0100", "msg_from": "Daniel Carter <danielchriscarter+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: Add GSSAPI ccache_name option to libpq" }, { "msg_contents": "Greetings,\n\n* Daniel Carter (danielchriscarter+postgres@gmail.com) wrote:\n> This is a small patch (against master) to allow an application using libpq\n> with GSSAPI authentication to specify where to fetch the credential cache\n> from -- it effectively consists of a new field in PQconninfoOptions to store\n> this data and (where the user has specified a ccache location) a call into\n> the gss_krb5_ccache_name function in the GSSAPI library.\n\nI'm not necessarily against this, but typically the GSSAPI library\nprovides a way for you to control this using, eg, the KRB5_CCACHE\nenvironment variable. 
Is there some reason why that couldn't be used..?\n\nThanks,\n\nStephen", "msg_date": "Tue, 20 Apr 2021 15:01:04 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add GSSAPI ccache_name option to libpq" }, { "msg_contents": "Hi Stephen,\n\nOn 20/04/2021 20:01, Stephen Frost wrote:\n> I'm not necessarily against this, but typically the GSSAPI library\n> provides a way for you to control this using, eg, the KRB5_CCACHE\n> environment variable. Is there some reason why that couldn't be used..?\n\nThe original motivation for investigating this was setting up a web app \nwhich could authenticate to a database server using a Kerberos ticket. \nSince the web framework already needs to create a connection string \n(with database name etc.) to set up the database connection, having an \noption here for the ccache location makes it much more straightforward \nto specify than having to save data out to environment variables (and \nmakes things cleaner if there are potentially multiple database \nconnections going on at once in different processes).\n\nThere may well be a better way of going about this -- it's just that I \ncan't currently see an obvious way to get this kind of setup working \nusing only the environment variable.\n\nMany thanks,\nDaniel\n\n\n", "msg_date": "Tue, 20 Apr 2021 20:44:23 +0100", "msg_from": "Daniel Carter <danielchriscarter+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: Add GSSAPI ccache_name option to libpq" }, { "msg_contents": "On Tue, Apr 20, 2021 at 08:44:23PM +0100, Daniel Carter wrote:\n> The original motivation for investigating this was setting up a web app\n> which could authenticate to a database server using a Kerberos ticket. Since\n> the web framework already needs to create a connection string (with database\n> name etc.) 
to set up the database connection, having an option here for the\n> ccache location makes it much more straightforward to specify than having to\n> save data out to environment variables (and makes things cleaner if there\n> are potentially multiple database connections going on at once in different\n> processes).\n> \n> There may well be a better way of going about this -- it's just that I can't\n> currently see an obvious way to get this kind of setup working using only\n> the environment variable.\n\nThe environment variable bit sounds like a fair argument to me.\n\nPlease do not forget to add this patch and thread to the next commit\nfest:\nhttps://commitfest.postgresql.org/33/\nYou need a community account, and that's unfortunately too late for\nPostgres 14, but the development of 15 will begin at the beginning of\nJuly so it could be included there.\n--\nMichael", "msg_date": "Wed, 21 Apr 2021 12:27:12 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add GSSAPI ccache_name option to libpq" }, { "msg_contents": "Hi\n\nOn Tue, Apr 20, 2021 at 8:44 PM Daniel Carter <\ndanielchriscarter+postgres@gmail.com> wrote:\n\n> Hi Stephen,\n>\n> On 20/04/2021 20:01, Stephen Frost wrote:\n> > I'm not necessarily against this, but typically the GSSAPI library\n> > provides a way for you to control this using, eg, the KRB5_CCACHE\n> > environment variable. Is there some reason why that couldn't be used..?\n>\n> The original motivation for investigating this was setting up a web app\n> which could authenticate to a database server using a Kerberos ticket.\n> Since the web framework already needs to create a connection string\n> (with database name etc.) 
to set up the database connection, having an\n> option here for the ccache location makes it much more straightforward\n> to specify than having to save data out to environment variables (and\n> makes things cleaner if there are potentially multiple database\n> connections going on at once in different processes).\n>\n\nYes, that's why we'd like it for pgAdmin. When dealing with a\nmulti-threaded application it becomes a pain keeping credentials for\ndifferent users separated; a lot more mucking about with mutexes etc. If we\ncould specify the credential cache location in the connection string, it\nwould be much easier (and likely more performant) to securely keep\nindividual caches for each user.\n\n\n>\n> There may well be a better way of going about this -- it's just that I\n> can't currently see an obvious way to get this kind of setup working\n> using only the environment variable.\n>\n> Many thanks,\n> Daniel\n>\n>\n>\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com\n\nHiOn Tue, Apr 20, 2021 at 8:44 PM Daniel Carter <danielchriscarter+postgres@gmail.com> wrote:Hi Stephen,\n\nOn 20/04/2021 20:01, Stephen Frost wrote:\n> I'm not necessarily against this, but typically the GSSAPI library\n> provides a way for you to control this using, eg, the KRB5_CCACHE\n> environment variable.  Is there some reason why that couldn't be used..?\n\nThe original motivation for investigating this was setting up a web app \nwhich could authenticate to a database server using a Kerberos ticket. \nSince the web framework already needs to create a connection string \n(with database name etc.) 
to set up the database connection, having an \noption here for the ccache location makes it much more straightforward \nto specify than having to save data out to environment variables (and \nmakes things cleaner if there are potentially multiple database \nconnections going on at once in different processes).Yes, that's why we'd like it for pgAdmin. When dealing with a multi-threaded application it becomes a pain keeping credentials for different users separated; a lot more mucking about with mutexes etc. If we could specify the credential cache location in the connection string, it would be much easier (and likely more performant) to securely keep individual caches for each user. \n\nThere may well be a better way of going about this -- it's just that I \ncan't currently see an obvious way to get this kind of setup working \nusing only the environment variable.\n\nMany thanks,\nDaniel\n\n\n-- Dave PageBlog: https://pgsnake.blogspot.comTwitter: @pgsnakeEDB: https://www.enterprisedb.com", "msg_date": "Wed, 21 Apr 2021 09:15:14 +0100", "msg_from": "Dave Page <dpage@pgadmin.org>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add GSSAPI ccache_name option to libpq" }, { "msg_contents": "On 2021-Apr-20, Daniel Carter wrote:\n\n> +#ifdef ENABLE_GSS\n> +\t{\"ccache_name\", NULL, NULL, NULL,\n> +\t\t\"Credential-cache-name\", \"\", 64,\n> +\toffsetof(struct pg_conn, ccache_name)},\n> +#endif\n\nI think it would be better that this option name includes \"gss\"\nsomewhere, and perhaps even avoid the shorthand \"ccache\" altogether.\nSee commit 5599f40d259a.\n\nThanks\n\n-- \n�lvaro Herrera Valdivia, Chile\n\n\n", "msg_date": "Wed, 21 Apr 2021 10:39:54 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add GSSAPI ccache_name option to libpq" }, { "msg_contents": "Greetings,\n\n* Daniel Carter (danielchriscarter+postgres@gmail.com) wrote:\n> On 20/04/2021 20:01, Stephen Frost wrote:\n> >I'm not necessarily 
against this, but typically the GSSAPI library\n> >provides a way for you to control this using, eg, the KRB5_CCACHE\n> >environment variable. Is there some reason why that couldn't be used..?\n> \n> The original motivation for investigating this was setting up a web app\n> which could authenticate to a database server using a Kerberos ticket. Since\n> the web framework already needs to create a connection string (with database\n> name etc.) to set up the database connection, having an option here for the\n> ccache location makes it much more straightforward to specify than having to\n> save data out to environment variables (and makes things cleaner if there\n> are potentially multiple database connections going on at once in different\n> processes).\n\nThis is certainly nothing new and the webserver modules supporting this,\nlike apache's mod_auth_kerb and mod_auth_gssapi, automatically handle\nsetting the env variables (along with lots of other ones which web apps\nhave been using for a very long time), so I have to admit that I'm a bit\nwary of the argument that this is somehow needed for web-based\napplications.\n\nI surely hope that the intent here is to use Negotiate / SPNEGO to\nauthenticate the user who is connecting to the webserver and then have\ncredentials delegated (ideally through constrained credential\ndelegation..) 
to the web server by the user for the web application to\nuse to connect to the PG server.\n\nI certainly don't think we should be targetting a solution where the\napplication is acquiring credentials from the KDC directly using a\nuser's username/password, that's very strongly discouraged for the very\ngood reason that it means the user's password is being passed around.\n\n> There may well be a better way of going about this -- it's just that I can't\n> currently see an obvious way to get this kind of setup working using only\n> the environment variable.\n\nPerhaps you could provide a bit more information about what you're\nspecifically doing here? Again, with something like apache's\nmod_auth_gssapi, it's a matter of just installing that module and then\nthe user will be authenticated by the web server itself, including\nmanaging of delegated credentials, setting of the environment variables,\nand the web application shouldn't have to do anything but use libpq to\nrequest a connection and if PG's configured with gssapi auth, it'll all\n'just work'. Only thing I can think of offhand is that you might have\nto take AUTH_USER and pass that to libpq as the user's username to\nconnect with and maybe get from the user what database to request the\nconnection to..\n\nThanks,\n\nStephen", "msg_date": "Wed, 21 Apr 2021 13:40:35 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add GSSAPI ccache_name option to libpq" }, { "msg_contents": "Hi Stephen,\n\nOn 21/04/2021 18:40, Stephen Frost wrote:\n> I surely hope that the intent here is to use Negotiate / SPNEGO to\n> authenticate the user who is connecting to the webserver and then have\n> credentials delegated (ideally through constrained credential\n> delegation..) 
to the web server by the user for the web application to\n> use to connect to the PG server.\n> \n> I certainly don't think we should be targetting a solution where the\n> application is acquiring credentials from the KDC directly using a\n> user's username/password, that's very strongly discouraged for the very\n> good reason that it means the user's password is being passed around.\n\nIndeed -- that's certainly not the intended aim of this patch!\n\n>> There may well be a better way of going about this -- it's just that I can't\n>> currently see an obvious way to get this kind of setup working using only\n>> the environment variable.\n> \n> Perhaps you could provide a bit more information about what you're\n> specifically doing here? Again, with something like apache's\n> mod_auth_gssapi, it's a matter of just installing that module and then\n> the user will be authenticated by the web server itself, including\n> managing of delegated credentials, setting of the environment variables,\n> and the web application shouldn't have to do anything but use libpq to\n> request a connection and if PG's configured with gssapi auth, it'll all\n> 'just work'. Only thing I can think of offhand is that you might have\n> to take AUTH_USER and pass that to libpq as the user's username to\n> connect with and maybe get from the user what database to request the\n> connection to..\n\nHmm, yes -- something like that is definitely a neater way of doing \nthings in the web app scenario (I'd been working on the principle that \nthe username and credential cache were \"provided\" from the same place, \ni.e. the web app, but as you point out that's not actually necessary).\n\nHowever, it seems like there might be some interest in this for other \nscenarios (e.g. 
with relation to multi-threaded applications where more \nprecise control of which thread uses which credential cache is useful), \nso possibly this may still be worth continuing with even if it has a \nslightly different intended purpose to what was originally planned?\n\nMany thanks,\nDaniel\n\n\n", "msg_date": "Thu, 22 Apr 2021 01:05:53 +0100", "msg_from": "Daniel Carter <danielchriscarter+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: Add GSSAPI ccache_name option to libpq" }, { "msg_contents": "Greetings,\n\n* Daniel Carter (danielchriscarter+postgres@gmail.com) wrote:\n> On 21/04/2021 18:40, Stephen Frost wrote:\n> >I surely hope that the intent here is to use Negotiate / SPNEGO to\n> >authenticate the user who is connecting to the webserver and then have\n> >credentials delegated (ideally through constrained credential\n> >delegation..) to the web server by the user for the web application to\n> >use to connect to the PG server.\n> >\n> >I certainly don't think we should be targetting a solution where the\n> >application is acquiring credentials from the KDC directly using a\n> >user's username/password, that's very strongly discouraged for the very\n> >good reason that it means the user's password is being passed around.\n> \n> Indeed -- that's certainly not the intended aim of this patch!\n\nGlad to hear that. :)\n\n> >>There may well be a better way of going about this -- it's just that I can't\n> >>currently see an obvious way to get this kind of setup working using only\n> >>the environment variable.\n> >\n> >Perhaps you could provide a bit more information about what you're\n> >specifically doing here? 
Again, with something like apache's\n> >mod_auth_gssapi, it's a matter of just installing that module and then\n> >the user will be authenticated by the web server itself, including\n> >managing of delegated credentials, setting of the environment variables,\n> >and the web application shouldn't have to do anything but use libpq to\n> >request a connection and if PG's configured with gssapi auth, it'll all\n> >'just work'. Only thing I can think of offhand is that you might have\n> >to take AUTH_USER and pass that to libpq as the user's username to\n> >connect with and maybe get from the user what database to request the\n> >connection to..\n> \n> Hmm, yes -- something like that is definitely a neater way of doing things\n> in the web app scenario (I'd been working on the principle that the username\n> and credential cache were \"provided\" from the same place, i.e. the web app,\n> but as you point out that's not actually necessary).\n\nYeah, that's really how web apps should be doing this.\n\n> However, it seems like there might be some interest in this for other\n> scenarios (e.g. 
with relation to multi-threaded applications where more\n> precise control of which thread uses which credential cache is useful), so\n> possibly this may still be worth continuing with even if it has a slightly\n> different intended purpose to what was originally planned?\n\nI'd want to hear the actual use-case rather than just hand-waving that\n\"oh, this might be useful for this threaded app that might exist some\nday\"...\n\nThanks,\n\nStephen", "msg_date": "Wed, 21 Apr 2021 20:55:29 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add GSSAPI ccache_name option to libpq" }, { "msg_contents": "On Thu, Apr 22, 2021 at 1:55 AM Stephen Frost <sfrost@snowman.net> wrote:\n\n> Greetings,\n>\n> * Daniel Carter (danielchriscarter+postgres@gmail.com) wrote:\n> > On 21/04/2021 18:40, Stephen Frost wrote:\n> > >I surely hope that the intent here is to use Negotiate / SPNEGO to\n> > >authenticate the user who is connecting to the webserver and then have\n> > >credentials delegated (ideally through constrained credential\n> > >delegation..) to the web server by the user for the web application to\n> > >use to connect to the PG server.\n> > >\n> > >I certainly don't think we should be targetting a solution where the\n> > >application is acquiring credentials from the KDC directly using a\n> > >user's username/password, that's very strongly discouraged for the very\n> > >good reason that it means the user's password is being passed around.\n> >\n> > Indeed -- that's certainly not the intended aim of this patch!\n>\n> Glad to hear that. :)\n>\n> > >>There may well be a better way of going about this -- it's just that I\n> can't\n> > >>currently see an obvious way to get this kind of setup working using\n> only\n> > >>the environment variable.\n> > >\n> > >Perhaps you could provide a bit more information about what you're\n> > >specifically doing here? 
Again, with something like apache's\n> > >mod_auth_gssapi, it's a matter of just installing that module and then\n> > >the user will be authenticated by the web server itself, including\n> > >managing of delegated credentials, setting of the environment variables,\n> > >and the web application shouldn't have to do anything but use libpq to\n> > >request a connection and if PG's configured with gssapi auth, it'll all\n> > >'just work'. Only thing I can think of offhand is that you might have\n> > >to take AUTH_USER and pass that to libpq as the user's username to\n> > >connect with and maybe get from the user what database to request the\n> > >connection to..\n> >\n> > Hmm, yes -- something like that is definitely a neater way of doing\n> things\n> > in the web app scenario (I'd been working on the principle that the\n> username\n> > and credential cache were \"provided\" from the same place, i.e. the web\n> app,\n> > but as you point out that's not actually necessary).\n>\n> Yeah, that's really how web apps should be doing this.\n>\n> > However, it seems like there might be some interest in this for other\n> > scenarios (e.g. with relation to multi-threaded applications where more\n> > precise control of which thread uses which credential cache is useful),\n> so\n> > possibly this may still be worth continuing with even if it has a\n> slightly\n> > different intended purpose to what was originally planned?\n>\n> I'd want to hear the actual use-case rather than just hand-waving that\n> \"oh, this might be useful for this threaded app that might exist some\n> day\"...\n>\n\nI thought I gave that precise use case upthread. As you know, we've been\nadding Kerberos support to pgAdmin. When running in server mode, we have\nmultiple users logging into a single instance of the application, and we\nneed to cache credentials for them to be used to login to the PostgreSQL\nservers, using libpq that is on the pgAdmin server. 
For obvious reasons, we\nwant to use separate credential caches for each pgAdmin user, and currently\nthat means having a mutex around every use of the caches, so we can be sure\nwe're safely manipulating the environment, using the correct cache, and\nthen continuing as normal once we're done.\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com\n\nOn Thu, Apr 22, 2021 at 1:55 AM Stephen Frost <sfrost@snowman.net> wrote:Greetings,\n\n* Daniel Carter (danielchriscarter+postgres@gmail.com) wrote:\n> On 21/04/2021 18:40, Stephen Frost wrote:\n> >I surely hope that the intent here is to use Negotiate / SPNEGO to\n> >authenticate the user who is connecting to the webserver and then have\n> >credentials delegated (ideally through constrained credential\n> >delegation..) to the web server by the user for the web application to\n> >use to connect to the PG server.\n> >\n> >I certainly don't think we should be targetting a solution where the\n> >application is acquiring credentials from the KDC directly using a\n> >user's username/password, that's very strongly discouraged for the very\n> >good reason that it means the user's password is being passed around.\n> \n> Indeed -- that's certainly not the intended aim of this patch!\n\nGlad to hear that. :)\n\n> >>There may well be a better way of going about this -- it's just that I can't\n> >>currently see an obvious way to get this kind of setup working using only\n> >>the environment variable.\n> >\n> >Perhaps you could provide a bit more information about what you're\n> >specifically doing here?  
Again, with something like apache's\n> >mod_auth_gssapi, it's a matter of just installing that module and then\n> >the user will be authenticated by the web server itself, including\n> >managing of delegated credentials, setting of the environment variables,\n> >and the web application shouldn't have to do anything but use libpq to\n> >request a connection and if PG's configured with gssapi auth, it'll all\n> >'just work'.  Only thing I can think of offhand is that you might have\n> >to take AUTH_USER and pass that to libpq as the user's username to\n> >connect with and maybe get from the user what database to request the\n> >connection to..\n> \n> Hmm, yes -- something like that is definitely a neater way of doing things\n> in the web app scenario (I'd been working on the principle that the username\n> and credential cache were \"provided\" from the same place, i.e. the web app,\n> but as you point out that's not actually necessary).\n\nYeah, that's really how web apps should be doing this.\n\n> However, it seems like there might be some interest in this for other\n> scenarios (e.g. with relation to multi-threaded applications where more\n> precise control of which thread uses which credential cache is useful), so\n> possibly this may still be worth continuing with even if it has a slightly\n> different intended purpose to what was originally planned?\n\nI'd want to hear the actual use-case rather than just hand-waving that\n\"oh, this might be useful for this threaded app that might exist some\nday\"...I thought I gave that precise use case upthread. As you know, we've been adding Kerberos support to pgAdmin. When running in server mode, we have multiple users logging into a single instance of the application, and we need to cache credentials for them to be used to login to the PostgreSQL servers, using libpq that is on the pgAdmin server. 
For obvious reasons, we want to use separate credential caches for each pgAdmin user, and currently that means having a mutex around every use of the caches, so we can be sure we're safely manipulating the environment, using the correct cache, and then continuing as normal once we're done. -- Dave PageBlog: https://pgsnake.blogspot.comTwitter: @pgsnakeEDB: https://www.enterprisedb.com", "msg_date": "Thu, 22 Apr 2021 09:10:02 +0100", "msg_from": "Dave Page <dpage@pgadmin.org>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add GSSAPI ccache_name option to libpq" } ]
[ { "msg_contents": "Hi,\n\nI noticed that the pg_stat_statements documentation doesn't include\nthe necessary config parameter setting \"compute_query_id = on\" in the\n\"typical usage\" (so if you just used those existing typical usage\nsettings, the tracking wouldn't actually work).\nI've attached a patch for this.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia", "msg_date": "Tue, 20 Apr 2021 19:49:34 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": true, "msg_subject": "Tiny update to pg_stat_statements documentation" }, { "msg_contents": "On Tue, Apr 20, 2021 at 3:19 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> Hi,\n>\n> I noticed that the pg_stat_statements documentation doesn't include\n> the necessary config parameter setting \"compute_query_id = on\" in the\n> \"typical usage\" (so if you just used those existing typical usage\n> settings, the tracking wouldn't actually work).\n> I've attached a patch for this.\n\n+1. How about mentioning something like below?\n\n+compute_query_id = on # when in-core query identifier computation is\ndesired, otherwise off.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 20 Apr 2021 15:36:17 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Tiny update to pg_stat_statements documentation" }, { "msg_contents": "On Tue, Apr 20, 2021 at 8:06 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> > I've attached a patch for this.\n>\n> +1. 
How about mentioning something like below?\n>\n> +compute_query_id = on # when in-core query identifier computation is\n> desired, otherwise off.\n>\n\nHmm, I think that comment is perhaps slightly misleading, as\ncompute_query_id wouldn't be set to \"off\" in settings for \"typical\nusage\".\nJust saying \"use in-core query identifier computation\" would be a\nbetter comment.\nHowever, I don't think the additional comment is really warranted\nhere, as the other typical usage settings are not commented, and all\nsettings are explained in the surrounding documentation.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Wed, 21 Apr 2021 11:10:36 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Tiny update to pg_stat_statements documentation" }, { "msg_contents": "On Wed, Apr 21, 2021 at 11:10:36AM +1000, Greg Nancarrow wrote:\n> However, I don't think the additional comment is really warranted\n> here, as the other typical usage settings are not commented, and all\n> settings are explained in the surrounding documentation.\n\nGood catch, Greg. I agree to keep things simple and just do what you\nare suggesting here.\n--\nMichael", "msg_date": "Wed, 21 Apr 2021 10:27:43 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Tiny update to pg_stat_statements documentation" }, { "msg_contents": "On Wed, Apr 21, 2021 at 6:40 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Tue, Apr 20, 2021 at 8:06 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > I've attached a patch for this.\n> >\n> > +1. 
How about mentioning something like below?\n> >\n> > +compute_query_id = on # when in-core query identifier computation is\n> > desired, otherwise off.\n> >\n>\n> Hmm, I think that comment is perhaps slightly misleading, as\n> compute_query_id wouldn't be set to \"off\" in settings for \"typical\n> usage\".\n> Just saying \"use in-core query identifier computation\" would be a\n> better comment.\n> However, I don't think the additional comment is really warranted\n> here, as the other typical usage settings are not commented, and all\n> settings are explained in the surrounding documentation.\n\nThanks Greg! I agree with you and withdraw my point.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 21 Apr 2021 07:01:27 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Tiny update to pg_stat_statements documentation" }, { "msg_contents": "On Wed, Apr 21, 2021 at 10:27:43AM +0900, Michael Paquier wrote:\n> On Wed, Apr 21, 2021 at 11:10:36AM +1000, Greg Nancarrow wrote:\n> > However, I don't think the additional comment is really warranted\n> > here, as the other typical usage settings are not commented, and all\n> > settings are explained in the surrounding documentation.\n> \n> Good catch, Greg.\n\nAgreed!\n\n> I agree to keep things simple and just do what you\n> are suggesting here.\n\n+1, it looks good to me.\n\n\n", "msg_date": "Wed, 21 Apr 2021 09:46:59 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Tiny update to pg_stat_statements documentation" }, { "msg_contents": "On Wed, Apr 21, 2021 at 09:46:59AM +0800, Julien Rouhaud wrote:\n> +1, it looks good to me.\n\nCool, thanks for confirming. 
The top of the docs have IMO enough\ndetails about the requirements around compute_query_id and third-party\nmodules, so we are done here.\n--\nMichael", "msg_date": "Wed, 21 Apr 2021 12:11:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Tiny update to pg_stat_statements documentation" } ]
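For reference, the "typical usage" block this fix touches would, with the patch applied, read along the following lines — a sketch assuming the surrounding settings already shown in the pg_stat_statements documentation:

```
# postgresql.conf
shared_preload_libraries = 'pg_stat_statements'
compute_query_id = on
pg_stat_statements.max = 10000
pg_stat_statements.track = all
```

Without the `compute_query_id = on` line (the one the patch adds), no query identifiers are computed and, as Greg notes, the tracking wouldn't actually work unless a third-party module supplies them.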
[ { "msg_contents": "Hello PostgreSQL Hackers,\n\nis it possible to preserve the PostgreSQL statistics on a server crash?\n\nSteps to reproduce the behaviour:\n1) Observe the statistics counters, take note\n2) Crash the machine, e.g. with sysrq; perhaps kill -9 on postgresql will\nalready suffice\n3) After recovery, observe the statistics counter again. Have they been\nreset to zero (Bad) or are they preserved (Good).\n\nResetting the counters to zero harms execution planning and auto_vacuum\noperations. That can cause growth of database as dead tuples are not removed\nat the right time. In the end the database can go offline if autovacuum\nnever runs.\n\nAs far as I've checked, this would have to be implemented.\n\nMy question would be whether there is something that would make this\nimpossible to implement, and if there isn't, I'd like this to be considered\na feature request.\n\n\nRegards\n\n-- \nPatrik Novotný\nAssociate Software Engineer\nRed Hat\npanovotn@redhat.com\n\nHello PostgreSQL Hackers,is it possible to preserve the PostgreSQL statistics on a server crash?Steps to reproduce the behaviour:1) Observe the statistics counters, take note2) Crash the machine, e.g. with sysrq; perhaps kill -9 on postgresql will already suffice3) After recovery, observe the statistics counter again. Have they been reset to zero (Bad) or are they preserved (Good).Resetting the counters to zero harms execution planning and auto_vacuumoperations. That can cause growth of database as dead tuples are not removedat the right time. 
In the end the database can go offline if autovacuum never runs.As far as I've checked, this would have to be implemented.My question would be whether there is something that would make this impossible to implement, and if there isn't, I'd like this to be considered a feature request.Regards-- Patrik NovotnýAssociate Software EngineerRed Hatpanovotn@redhat.com", "msg_date": "Tue, 20 Apr 2021 13:59:44 +0200", "msg_from": "Patrik Novotny <panovotn@redhat.com>", "msg_from_op": true, "msg_subject": "RFE: Make statistics robust for unplanned events" }, { "msg_contents": "On Tue, Apr 20, 2021 at 5:00 AM Patrik Novotny <panovotn@redhat.com> wrote:\n> As far as I've checked, this would have to be implemented.\n>\n> My question would be whether there is something that would make this impossible to implement, and if there isn't, I'd like this to be considered a feature request.\n\nI agree with you.\n\nMaybe crash safety would require some care in cases where autovacuum\nruns very frequently, so that the overhead isn't too high. But\noverall, non-crash-safe information that drives autovacuum is penny\nwise, pound foolish.\n\nI'm sure that it doesn't matter that much most of the time, but there\nare probably workloads and use cases where it causes significant and\npersistent problems. That's not the right trade-off IMV.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 20 Apr 2021 14:25:57 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: RFE: Make statistics robust for unplanned events" }, { "msg_contents": "On Tue, Apr 20, 2021 at 2:00 PM Patrik Novotny <panovotn@redhat.com> wrote:\n>\n> Hello PostgreSQL Hackers,\n>\n> is it possible to preserve the PostgreSQL statistics on a server crash?\n>\n> Steps to reproduce the behaviour:\n> 1) Observe the statistics counters, take note\n> 2) Crash the machine, e.g. with sysrq; perhaps kill -9 on postgresql will already suffice\n> 3) After recovery, observe the statistics counter again. 
Have they been reset to zero (Bad) or are they preserved (Good).\n>\n> Resetting the counters to zero harms execution planning and auto_vacuum\n> operations. That can cause growth of database as dead tuples are not removed\n> at the right time. In the end the database can go offline if autovacuum never runs.\n\nThe stats for the planner are stored persistently in pg_stats though,\nbut autovacuum definitely takes a hit from it, and several other\nthings can too.\n\n> As far as I've checked, this would have to be implemented.\n>\n> My question would be whether there is something that would make this impossible to implement, and if there isn't, I'd like this to be considered a feature request.\n\nI'm pretty sure everybody would *want* this. At least nobody would be\nagainst it. The problem is the potential performance cost of it.\n\nAndres mentioned at least once over in the thread about shared memory\nstats collection that being able to have persistent stats could come\nout of that one in the future. Whatever is done on the topic should\nprobably be done based on that work, as it provides a better starting\npoint and also one that will stay around.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Wed, 21 Apr 2021 14:38:44 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: RFE: Make statistics robust for unplanned events" }, { "msg_contents": "\n\nOn 4/21/21 2:38 PM, Magnus Hagander wrote:\n> On Tue, Apr 20, 2021 at 2:00 PM Patrik Novotny <panovotn@redhat.com> wrote:\n>>\n>> Hello PostgreSQL Hackers,\n>>\n>> is it possible to preserve the PostgreSQL statistics on a server crash?\n>>\n>> Steps to reproduce the behaviour:\n>> 1) Observe the statistics counters, take note\n>> 2) Crash the machine, e.g. with sysrq; perhaps kill -9 on postgresql will already suffice\n>> 3) After recovery, observe the statistics counter again. 
Have they been reset to zero (Bad) or are they preserved (Good).\n>>\n>> Resetting the counters to zero harms execution planning and auto_vacuum\n>> operations. That can cause growth of database as dead tuples are not removed\n>> at the right time. In the end the database can go offline if autovacuum never runs.\n> \n> The stats for the planner are store persistently in pg_stats though,\n> but autovacuum definitely takes a hit from it, and several other\n> things can too.\n> \n>> As far as I've checked, this would have to be implemented.\n>>\n\nI think the problem with planner stats is that after reset of the\nruntime stats we lose info about which tables may need analyze etc. and\nthen fail to run ANALYZE in time. Which may have negative impact on\nperformance, of course.\n\n>> My question would be whether there is something that would make \n>> this impossible to implement, and if there isn't, I'd like this to\n>> be considered a feature request.\n> \n> I'm pretty sure everybody would *want* this. At least nobody would be\n> against it. The problem is the potential performance cost of it.\n> \n> Andres mentioned at least once over in the thread about shared memory\n> stats collection that being able to have persistent stats could come\n> out of that one in the future. Whatever is done on the topic should\n> probably be done based on that work, as it provides a better starting\n> point and also one that will stay around.\n> \n\nRight. 
I think the other question is how often does this happen in\npractice - if your instance crashes often enough to make this an issue,\nthen there are probably bigger issues.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 21 Apr 2021 17:02:05 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: RFE: Make statistics robust for unplanned events" }, { "msg_contents": "On Wed, Apr 21, 2021 at 5:02 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n>\n>\n> On 4/21/21 2:38 PM, Magnus Hagander wrote:\n> > On Tue, Apr 20, 2021 at 2:00 PM Patrik Novotny <panovotn@redhat.com> wrote:\n> >>\n> >> Hello PostgreSQL Hackers,\n> >>\n> >> is it possible to preserve the PostgreSQL statistics on a server crash?\n> >>\n> >> Steps to reproduce the behaviour:\n> >> 1) Observe the statistics counters, take note\n> >> 2) Crash the machine, e.g. with sysrq; perhaps kill -9 on postgresql will already suffice\n> >> 3) After recovery, observe the statistics counter again. Have they been reset to zero (Bad) or are they preserved (Good).\n> >>\n> >> Resetting the counters to zero harms execution planning and auto_vacuum\n> >> operations. That can cause growth of database as dead tuples are not removed\n> >> at the right time. In the end the database can go offline if autovacuum never runs.\n> >\n> > The stats for the planner are store persistently in pg_stats though,\n> > but autovacuum definitely takes a hit from it, and several other\n> > things can too.\n> >\n> >> As far as I've checked, this would have to be implemented.\n> >>\n>\n> I think the problem with planner stats is that after reset of the\n> runtime stats we lose info about which tables may need analyze etc. and\n> then fail to run ANALYZE in time. 
Which may have negative impact on\n> performance, of course.\n>\n> >> My question would be whether there is something that would make\n> >> this impossible to implement, and if there isn't, I'd like this to\n> >> be considered a feature request.\n> >\n> > I'm pretty sure everybody would *want* this. At least nobody would be\n> > against it. The problem is the potential performance cost of it.\n> >\n> > Andres mentioned at least once over in the thread about shared memory\n> > stats collection that being able to have persistent stats could come\n> > out of that one in the future. Whatever is done on the topic should\n> > probably be done based on that work, as it provides a better starting\n> > point and also one that will stay around.\n> >\n>\n> Right. I think the other question is how often does this happen in\n> practice - if your instance crashes often enough to make this an issue,\n> then there are probably bigger issues.\n\nAgreed.\n\nI think the bigger problem there is replication failover, but that's\nalso a different issue (keeping the statistics from the *standby*\nwouldn't help you much there, you'd need to replicate it from the\nprimary).\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Wed, 21 Apr 2021 17:04:42 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: RFE: Make statistics robust for unplanned events" }, { "msg_contents": "On Wed, Apr 21, 2021 at 5:05 PM Magnus Hagander <magnus@hagander.net> wrote:\n\n>\n> > Right. 
I think the other question is how often does this happen in\n> > practice - if your instance crashes often enough to make this an issue,\n> > then there are probably bigger issues.\n>\n> Agreed.\n>\n> I think the bigger problem there is replication failover, but that's\n> also a different issue (keeping the statistics from the *standby*\n> wouldn't help you much there, you'd need to replicate it from the\n> primary).\n>\n> --\n> Magnus Hagander\n> Me: https://www.hagander.net/\n> Work: https://www.redpill-linpro.com/\n>\n>\nThe report that I've received regarding this RFE has been triggered by\nexperiencing issues with long term deployments in a large scale industrial\nenvironment. The point of this RFE is to be protected against those issues\nin the future. While this doesn't seem to be a very frequent occurrence, I\nwouldn't consider this a corner case not being worth attention.\n\nIf there is an expectation for the performance loss to be less of a problem\nin the future, would it make sense to make this an opt-in feature until\nthen?\n\n-- \nPatrik Novotný\nAssociate Software Engineer\nRed Hat\npanovotn@redhat.com\n\n", "msg_date": "Thu, 22 Apr 2021 10:57:37 +0200", "msg_from": "Patrik Novotny <panovotn@redhat.com>", "msg_from_op": true, "msg_subject": "Re: RFE: Make statistics robust for unplanned events" }, { "msg_contents": "On Wed, Apr 21, 2021 at 5:39 AM Magnus Hagander <magnus@hagander.net> wrote:\n> I'm pretty sure everybody would *want* this. At least nobody would be\n> against it. The problem is the potential performance cost of it.\n\nVACUUM remembers vacrel->new_live_tuples as the pg_class.reltuples for\nthe heap relation being vacuumed. It also remembers new_rel_pages in\npg_class (see vac_update_relstats()). However, it does not remember\nvacrel->new_dead_tuples in pg_class or in any other durable location\n(the information gets remembered via a call to pgstat_report_vacuum()\ninstead).\n\nWe already *almost* pay the full cost of durably storing the\ninformation used by autovacuum.c's relation_needs_vacanalyze() to\ndetermine if a VACUUM is required -- we're only missing\nnew_dead_tuples/tabentry->n_dead_tuples. Why not go one tiny baby step\nfurther to fix this issue?\n\nAdmittedly, storing new_dead_tuples durably is not sufficient to allow\nANALYZE to be launched on schedule when there is a hard crash. It is\nalso insufficient to make sure that insert-driven autovacuums get\nlaunched on schedule. Even still, I'm pretty sure that just making\nsure that we store it durably (alongside pg_class.reltuples?) 
will\nimpose only a modest additional cost, while fixing Patrik's problem.\nThat seems likely to be worth it.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 22 Apr 2021 15:22:49 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: RFE: Make statistics robust for unplanned events" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> We already *almost* pay the full cost of durably storing the\n> information used by autovacuum.c's relation_needs_vacanalyze() to\n> determine if a VACUUM is required -- we're only missing\n> new_dead_tuples/tabentry->n_dead_tuples. Why not go one tiny baby step\n> further to fix this issue?\n\nDefinitely worth thinking about, but I'm a little confused about how\nyou see this working. Those pg_class fields are updated by vacuum\n(or analyze) itself. How could they usefully serve as input to\nautovacuum's decisions?\n\n> Admittedly, storing new_dead_tuples durably is not sufficient to allow\n> ANALYZE to be launched on schedule when there is a hard crash. It is\n> also insufficient to make sure that insert-driven autovacuums get\n> launched on schedule.\n\nI'm not that worried about the former case, but the latter seems\nlike kind of a problem.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 22 Apr 2021 18:35:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: RFE: Make statistics robust for unplanned events" }, { "msg_contents": "Hi,\n\nOn 2021-04-21 14:38:44 +0200, Magnus Hagander wrote:\n> Andres mentioned at least once over in the thread about shared memory\n> stats collection that being able to have persistent stats could come\n> out of that one in the future. Whatever is done on the topic should\n> probably be done based on that work, as it provides a better starting\n> point and also one that will stay around.\n\nYea. 
I think the main benefit from the shared memory stats patch that\nwould make this easier is that it tracks (with one small hole that can\nprobably be addressed) dropped objects accurately, even across crashes /\nreplication. Having old stats around runs into danger of mixing stats\nfor an old dropped object being combined with stats for a new object.\n\nI don't think making pgstat.c fully durable by continually storing the\ndata in a table or something like that is an option. For one, the stats\nfor a replica and primary are independent. For another, the overhead\nwould be prohibitive.\n\nBut after we gain the accurate dropping of stats we can store a stats\nsnapshot corresponding to certain WAL records (by serializing to\nsomething like pg_stats_%redo_lsn%) without ending up with dropped stats\nsurviving.\n\nA big question around this is how often we'd want to write out the\nstats. Obviously, the more often we do, the higher the overhead. And the\nless frequently, the more stats updates might be lost.\n\n\nPatrik, for your use cases, would losing at most the stats since the\nstart of last checkpoint be an issue?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 22 Apr 2021 15:41:42 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: RFE: Make statistics robust for unplanned events" }, { "msg_contents": "On Fri, Apr 23, 2021 at 12:41 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-04-21 14:38:44 +0200, Magnus Hagander wrote:\n> > Andres mentioned at least once over in the thread about shared memory\n> > stats collection that being able to have persistent stats could come\n> > out of that one in the future. Whatever is done on the topic should\n> > probably be done based on that work, as it provides a better starting\n> > point and also one that will stay around.\n>\n> Yea. 
I think the main benefit from the shared memory stats patch that\n> would make this a easier is that it tracks (with one small hole that can\n> probably be addressed) dropped objects accurately, even across crashes /\n> replication. Having old stats around runs into danger of mixing stats\n> for an old dropped object being combined with stats for a new object.\n>\n> I don't think making pgstat.c fully durable by continually storing the\n> data in a table or something like that is an option. For one, the stats\n> for a replica and primary are independent. For another, the overhead\n> would be prohibitive.\n>\n> But after we gain the accurate dropping of stats we can store a stats\n> snapshot corresponding to certain WAL records (by serializing to\n> something like pg_stats_%redo_lsn%) without ending up with dropped stats\n> surviving.\n>\n> A big question around this is how often we'd want to write out the\n> stats. Obviously, the more often we do, the higher the overhead. And the\n> less frequently, the more stats updates might be lost.\n\nYeah, that's what I was thinking as well -- dumping snapshot at\nregular intervals, so that on crash recovery we lose a \"controlled\namount\" of recent stats instead of losing *everything*.\n\nI think in most situations a fairly long interval is OK -- if you have\ntables that take so many hits that you need a really quick reaction\nfrom autovacuum it will probably pick that up quickly enough even\nafter a reset. And if it's more the long-term tracking that's\nimportant, then a longer interval is probably OK.\n\nBut perhaps make it configurable with a timeout and a \"reasonable default\"?\n\n\n> Patrik, for your use cases, would loosing at most the stats since the\n> start of last checkpoint be an issue?\n\nUnless there's a particular benefit to tie it specifically to the\ncheckpoint occurring, I'd rather keep it as a separate setting. 
They\nmight both come with the same default of course, but I can certainly\nenvision cases where you want to increase the checkpoint distance\nwhile keeping the stats interval lower for example. Many people\nincrease the checkpoint timeout quite a lot...\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Fri, 23 Apr 2021 10:21:20 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: RFE: Make statistics robust for unplanned events" }, { "msg_contents": ">\n>\n> Yeah, that's what I was thinking as well -- dumping snapshot at\n> regular intervals, so that on crash recovery we lose a \"controlled\n> amount\" of recent starts instead of losing *everything*.\n>\n> I think in most situations a fairly long interval is OK -- if you have\n> tables that take so many hits that you need a really quick reaction\n> from autovacuum it will probably pick that up quickly enough even\n> after a reset. And if it's moer the long-term tracking that's\n> important, then a longer interval is probably OK.\n>\n> But perhaps make it configurable with a timeout and a \"reasonable default\"?\n>\n>\n> > Patrik, for your use cases, would loosing at most the stats since the\n> > start of last checkpoint be an issue?\n>\n> Unless there's a particular benefit to tie it specifically to the\n> checkpoint occuring, I'd rather keep it as a separate setting. They\n> might both come with the same default of course, btu I can certainly\n> envision cases where you want to increase the checkpoint distance\n> while keeping the stats interval lower for example. Many people\n> increase the checkpoint timeout quite a lot...\n>\n>\n From what I understand, I think it depends on the interval of those\ncheckpoints. 
If the interval was configurable with the mentioned reasonable\ndefault, then it shouldn't be an issue.\n\nIf we were to choose a hard coded interval of those checkpoints based on my\ncase, I would have to consult the original reporter, but then it might not\nsuit others anyway. Therefore, making it configurable makes more sense to\nme personally.\n\n-- \nPatrik Novotný\nAssociate Software Engineer\nRed Hat\npanovotn@redhat.com\n\n", "msg_date": "Fri, 23 Apr 2021 10:51:02 +0200", "msg_from": "Patrik Novotny <panovotn@redhat.com>", "msg_from_op": true, "msg_subject": "Re: RFE: Make statistics robust for unplanned events" }, { "msg_contents": "On Thu, Apr 22, 2021 at 3:35 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Peter Geoghegan <pg@bowt.ie> writes:\n> > We already *almost* pay the full cost of durably storing the\n> > information used by autovacuum.c's relation_needs_vacanalyze() to\n> > determine if a VACUUM is required -- we're only missing\n> > new_dead_tuples/tabentry->n_dead_tuples. Why not go one tiny baby step\n> > further to fix this issue?\n>\n> Definitely worth thinking about, but I'm a little confused about how\n> you see this working. Those pg_class fields are updated by vacuum\n> (or analyze) itself. How could they usefully serve as input to\n> autovacuum's decisions?\n\nHonestly, the details weren't very well thought out. My basic point\nstill stands, which is that it shouldn't be *that* expensive to make\nthe relevant information crash-safe, which would protect the system\nfrom certain pathological cases. Maybe it doesn't even have to be\ncrash-safe in the way that pg_class.reltuples is -- something\napproximate might work quite well. Assuming that there are no dead\ntuples after a crash seems rather naive.\n\nI seem to recall that certain init scripts I saw years ago used\nImmediate Shutdown mode very frequently. Stuff like that is bound to\nhappen in some installations, and so we should protect users from\nhard-to-foresee extreme consequences. 
Sure, using immediate shutdown\nlike that isn't recommended practice, but that's no reason to allow a\nnasty failure mode.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Fri, 30 Apr 2021 17:23:06 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: RFE: Make statistics robust for unplanned events" } ]
[ { "msg_contents": "Hi,\n\nI just noticed that a comment for dshash_find() mentions:\n\n\"caller must not lock a lock already\"\n\nSimple patch to rephrase with \"hold a lock\" attached.", "msg_date": "Tue, 20 Apr 2021 20:16:59 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Typo in dshash_find() comments" }, { "msg_contents": "On Tue, Apr 20, 2021 at 2:16 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Hi,\n>\n> I just noticed that a comment for dshash_find() mentions:\n>\n> \"caller must not lock a lock already\"\n>\n> Simple patch to rephrase with \"hold a lock\" attached.\n\nPushed, thanks.\n\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Tue, 20 Apr 2021 14:37:16 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Typo in dshash_find() comments" }, { "msg_contents": "On Tue, Apr 20, 2021 at 02:37:16PM +0200, Magnus Hagander wrote:\n> On Tue, Apr 20, 2021 at 2:16 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > I just noticed that a comment for dshash_find() mentions:\n> >\n> > \"caller must not lock a lock already\"\n> >\n> > Simple patch to rephrase with \"hold a lock\" attached.\n> \n> Pushed, thanks.\n\nThanks Magnus!\n\n\n", "msg_date": "Tue, 20 Apr 2021 20:39:48 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Typo in dshash_find() comments" } ]
[ { "msg_contents": "\nI've just noticed that we have 41 perl files in our sources with\ncopyright notices of some sort and 161 without. Should we do something\nabout that?\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 20 Apr 2021 09:58:53 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Copyright on perl files" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> I've just noticed that we have 41 perl files in our sources with\n> copyright notices of some sort and 161 without. Should we do something\n> about that?\n\n+1 for pasting the usual copyright notice on the rest.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 20 Apr 2021 10:09:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Copyright on perl files" }, { "msg_contents": "\nOn 4/20/21 10:09 AM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> I've just noticed that we have 41 perl files in our sources with\n>> copyright notices of some sort and 161 without. Should we do something\n>> about that?\n> +1 for pasting the usual copyright notice on the rest.\n>\n> \t\t\t\n\n\n\nDone.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 7 May 2021 11:15:11 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: Copyright on perl files" } ]
[ { "msg_contents": "Hi hackers,\n\nWhile trying to build PostgreSQL from source (master branch, 95c3a195) on a\nMacBook I discovered that `make check` fails:\n\n```\n============== removing existing temp instance ==============\n============== creating temporary instance ==============\n============== initializing database system ==============\n============== starting postmaster ==============\nsh: line 1: 33559 Abort trap: 6 \"psql\" -X postgres < /dev/null 2>\n/dev/null\nsh: line 1: 33562 Abort trap: 6 \"psql\" -X postgres < /dev/null 2>\n/dev/null\n...\nsh: line 1: 33742 Abort trap: 6 \"psql\" -X postgres < /dev/null 2>\n/dev/null\n\npg_regress: postmaster did not respond within 60 seconds\nExamine\n/Users/eax/projects/c/postgresql/src/test/regress/log/postmaster.log for\nthe reason\nmake[1]: *** [check] Error 2\nmake: *** [check] Error 2\n```\n\nA little investigation revealed that pg_regres executes postgres like this:\n\n```\nPATH=\"/Users/eax/projects/c/postgresql/tmp_install/Users/eax/pginstall/bin:$PATH\"\nDYLD_LIBRARY_PATH=\"/Users/eax/projects/c/postgresql/tmp_install/Users/eax/pginstall/lib\"\n\"postgres\" -D\n\"/Users/eax/projects/c/postgresql/src/test/regress/./tmp_check/data\" -F -c\n\"listen_addresses=\" -k \"/Users/eax/pgtmp/pg_regress-S34sXM\" >\n\"/Users/eax/projects/c/postgresql/src/test/regress/log/postmaster.log\"\n```\n\n... and checks that it's online by executing:\n\n```\nPATH=\"/Users/eax/projects/c/postgresql/tmp_install/Users/eax/pginstall/bin:$PATH\"\nDYLD_LIBRARY_PATH=\"/Users/eax/projects/c/postgresql/tmp_install/Users/eax/pginstall/lib\"\npsql -X postgres\n```\n\nThe last command fails with:\n\n```\npsql: error: connection to server on socket \"/tmp/.s.PGSQL.5432\" failed: No\nsuch file or directory. 
Is the server running locally and accepting\nconnections on that socket?\n```\n\nThis is because the actual path to the socket is:\n\n```\n~/pgtmp/pg_regress-S34sXM/.s.PGSQL.5432\n```\n\nWhile debugging this I also discovered that psql uses\n/usr/lib/libpq.5.dylib library, according to the `image list` command in\nLLDB. The library is provided with the system and can't be moved or\ndeleted. In other words, it seems to ignore DYLD_LIBRARY_PATH. I've found\nan instruction [1] that suggests that this is a behavior of MacOS integrity\nprotection and describes how it can be disabled. Sadly it made no\ndifference in my case, psql still ignores DYLD_LIBRARY_PATH.\n\nWhile I'm still in the process of investigating this I just wanted to ask\nif anyone is developing on MacOS and observes anything similar and had any\nluck solving the problem? I tried to search through the mailing list but\ndidn't find anything relevant. The complete script that reproduces the\nissue is attached. I'm using the same script on Ubuntu VM, where it works\njust fine.\n\n[1]: https://github.com/rbenv/rbenv/issues/962#issuecomment-275858404\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Tue, 20 Apr 2021 17:57:55 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "`make check` doesn't pass on MacOS Catalina" }, { "msg_contents": "Aleksander Alekseev <aleksander@timescale.com> writes:\n> While trying to build PostgreSQL from source (master branch, 95c3a195) on a\n> MacBook I discovered that `make check` fails:\n\nThis is the usual symptom of not having disabled SIP :-(.\n\nIf you don't want to do that, do \"make install\" before \"make check\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 20 Apr 2021 11:02:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: `make check` doesn't pass on MacOS Catalina" }, { "msg_contents": "Hi Tom,\n\nMany thanks, running \"make install\" before 
\"make check\" helped.\n\n\nOn Tue, Apr 20, 2021 at 6:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Aleksander Alekseev <aleksander@timescale.com> writes:\n> > While trying to build PostgreSQL from source (master branch, 95c3a195) on a\n> > MacBook I discovered that `make check` fails:\n>\n> This is the usual symptom of not having disabled SIP :-(.\n>\n> If you don't want to do that, do \"make install\" before \"make check\".\n>\n> regards, tom lane\n\n\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 20 Apr 2021 18:55:02 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: `make check` doesn't pass on MacOS Catalina" }, { "msg_contents": "\nOn 4/20/21 11:02 AM, Tom Lane wrote:\n> Aleksander Alekseev <aleksander@timescale.com> writes:\n>> While trying to build PostgreSQL from source (master branch, 95c3a195) on a\n>> MacBook I discovered that `make check` fails:\n> This is the usual symptom of not having disabled SIP :-(.\n>\n> If you don't want to do that, do \"make install\" before \"make check\".\n>\n> \t\t\t\n\n\n\n\nFYI the buildfarm client has a '--delay-check' option that does exactly\nthis. It's useful on Alpine Linux as well as MacOS\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 20 Apr 2021 12:06:22 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: `make check` doesn't pass on MacOS Catalina" }, { "msg_contents": "Hi hackers,\n\nThank you very much. I'm facing the same problem yesterday. 
May I\nsuggest that document it in the installation guide on MacOS platform?\n\nOn 4/21/21, Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n> On 4/20/21 11:02 AM, Tom Lane wrote:\n>> Aleksander Alekseev <aleksander@timescale.com> writes:\n>>> While trying to build PostgreSQL from source (master branch, 95c3a195) on\n>>> a\n>>> MacBook I discovered that `make check` fails:\n>> This is the usual symptom of not having disabled SIP :-(.\n>>\n>> If you don't want to do that, do \"make install\" before \"make check\".\n>>\n>> \t\t\t\n>\n>\n>\n>\n> FYI the buildfarm client has a '--delay-check' option that does exactly\n> this. It's useful on Alpine Linux as well as MacOS\n>\n>\n> cheers\n>\n>\n> andrew\n>\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n>\n>\n>\n\n\n-- \nBest Regards,\nXing\n\n\n", "msg_date": "Wed, 21 Apr 2021 08:53:46 +0800", "msg_from": "Xing GUO <higuoxing@gmail.com>", "msg_from_op": false, "msg_subject": "Re: `make check` doesn't pass on MacOS Catalina" }, { "msg_contents": "Xing GUO <higuoxing@gmail.com> writes:\n> Thank you very much. I'm facing the same problem yesterday. May I\n> suggest that document it in the installation guide on MacOS platform?\n\nIt is documented --- see last para under\n\nhttps://www.postgresql.org/docs/current/installation-platform-notes.html#INSTALLATION-NOTES-MACOS\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 20 Apr 2021 21:15:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: `make check` doesn't pass on MacOS Catalina" }, { "msg_contents": "On 4/21/21, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Xing GUO <higuoxing@gmail.com> writes:\n>> Thank you very much. I'm facing the same problem yesterday. May I\n>> suggest that document it in the installation guide on MacOS platform?\n>\n> It is documented --- see last para under\n>\n> https://www.postgresql.org/docs/current/installation-platform-notes.html#INSTALLATION-NOTES-MACOS\n\nThank you! 
Sorry for my carelessness...\n\n>\n> \t\t\tregards, tom lane\n>\n\n\n-- \nBest Regards,\nXing\n\n\n", "msg_date": "Wed, 21 Apr 2021 09:24:22 +0800", "msg_from": "Xing GUO <higuoxing@gmail.com>", "msg_from_op": false, "msg_subject": "Re: `make check` doesn't pass on MacOS Catalina" }, { "msg_contents": "On Tue, Apr 20, 2021 at 9:06 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n> On 4/20/21 11:02 AM, Tom Lane wrote:\n> > Aleksander Alekseev <aleksander@timescale.com> writes:\n> >> While trying to build PostgreSQL from source (master branch, 95c3a195) on a\n> >> MacBook I discovered that `make check` fails:\n> > This is the usual symptom of not having disabled SIP :-(.\n> >\n> > If you don't want to do that, do \"make install\" before \"make check\".\n\n> FYI the buildfarm client has a '--delay-check' option that does exactly\n> this. It's useful on Alpine Linux as well as MacOS\n\nI was trying to set up a buildfarm animal, and this exact problem lead\nto a few hours of debugging and hair-pulling. Can the default\nbehaviour be changed in buildfarm client to perform `make check` only\nafter `make install`.\n\nCurrent buildfarm client code looks something like:\n\n make();\n make_check() unless $delay_check;\n ... other steps ...\n make_install();\n ... other steps-2...\n make_check() if $delay_check;\n\nThere are no comments as to why one should choose to use --delay-check\n($delay_check). This email, and the pointer to the paragraph buried in\nthe docs, shared by Tom, are the only two ways one can understand what\nis causing this failure, and how to get around it.\n\nNaive question: What's stopping us from rewriting the code as follows.\n make();\n make_install();\n make_check();\n ... other steps ...\n ... 
other steps-2...\n # or move make_check() call here\n\nWith a quick google search I could not find why --delay-check is\nnecessary on Apline linux, as well; can you please elaborate.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n", "msg_date": "Sat, 6 Aug 2022 03:49:51 -0700", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": false, "msg_subject": "Re: `make check` doesn't pass on MacOS Catalina" }, { "msg_contents": "\nOn 2022-08-06 Sa 06:49, Gurjeet Singh wrote:\n> On Tue, Apr 20, 2021 at 9:06 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>> On 4/20/21 11:02 AM, Tom Lane wrote:\n>>> Aleksander Alekseev <aleksander@timescale.com> writes:\n>>>> While trying to build PostgreSQL from source (master branch, 95c3a195) on a\n>>>> MacBook I discovered that `make check` fails:\n>>> This is the usual symptom of not having disabled SIP :-(.\n>>>\n>>> If you don't want to do that, do \"make install\" before \"make check\".\n>> FYI the buildfarm client has a '--delay-check' option that does exactly\n>> this. It's useful on Alpine Linux as well as MacOS\n> I was trying to set up a buildfarm animal, and this exact problem lead\n> to a few hours of debugging and hair-pulling. Can the default\n> behaviour be changed in buildfarm client to perform `make check` only\n> after `make install`.\n>\n> Current buildfarm client code looks something like:\n>\n> make();\n> make_check() unless $delay_check;\n> ... other steps ...\n> make_install();\n> ... other steps-2...\n> make_check() if $delay_check;\n>\n> There are no comments as to why one should choose to use --delay-check\n> ($delay_check). This email, and the pointer to the paragraph buried in\n> the docs, shared by Tom, are the only two ways one can understand what\n> is causing this failure, and how to get around it.\n>\n> Naive question: What's stopping us from rewriting the code as follows.\n> make();\n> make_install();\n> make_check();\n> ... other steps ...\n> ... 
other steps-2...\n> # or move make_check() call here\n>\n> With a quick google search I could not find why --delay-check is\n> necessary on Apline linux, as well; can you please elaborate.\n>\n\nI came across this when I was working on setting up some Dockerfiles for\nthe buildfarm. Apparently LD_LIBRARY_PATH doesn't work on Alpine, at\nleast out of the box, as it uses a different linker, and \"make check\"\nrelies on it (or the moral equivalent) if \"make install\" hasn't been run.\n\nIn general we want to run \"make check\" as soon as possible after running\n\"make\" on the core code. That's why I didn't simply delay it\nunconditionally.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 6 Aug 2022 09:51:50 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: `make check` doesn't pass on MacOS Catalina" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2022-08-06 Sa 06:49, Gurjeet Singh wrote:\n>> There are no comments as to why one should choose to use --delay-check\n>> ($delay_check). This email, and the pointer to the paragraph buried in\n>> the docs, shared by Tom, are the only two ways one can understand what\n>> is causing this failure, and how to get around it.\n\n> In general we want to run \"make check\" as soon as possible after running\n> \"make\" on the core code. That's why I didn't simply delay it\n> unconditionally.\n\nIn general --- that is, on non-broken platforms --- \"make check\"\n*should* work without a prior \"make install\". I am absolutely\nnot in favor of changing the buildfarm so that it fails to detect\nthe problem if we break that. 
But for sure it'd make sense to add\nsome comments to the wiki and/or sample config file explaining\nthat you need to set this option on systems X,Y,Z.\n\nOn macOS you need to use it if you haven't disabled SIP.\nI don't have the details about any other problem platforms.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 06 Aug 2022 10:41:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: `make check` doesn't pass on MacOS Catalina" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> I came across this when I was working on setting up some Dockerfiles for\n> the buildfarm. Apparently LD_LIBRARY_PATH doesn't work on Alpine, at\n> least out of the box, as it uses a different linker, and \"make check\"\n> relies on it (or the moral equivalent) if \"make install\" hasn't been run.\n\nI did some quick googling on this point. We seem not to be the only\nproject having linking issues on Alpine, and yet it does support\nLD_LIBRARY_PATH according to some fairly authoritative-looking pages, eg\n\nhttps://www.musl-libc.org/doc/1.0.0/manual.html\n\nI suspect the situation is similar to macOS, ie there is some limitation\nsomewhere on whether LD_LIBRARY_PATH gets passed through. If memory\nserves, the problem on SIP-enabled Mac is that DYLD_LIBRARY_PATH is\ncleared upon invoking bash, so that we lose it anywhere that \"make\"\ninvokes a shell to run a subprogram. (Hmm ... I wonder whether ninja\nuses the shell ...) I don't personally care at all about Alpine, but\nmaybe somebody who does could dig a little harder and characterize\nthe problem there better.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 06 Aug 2022 11:25:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: `make check` doesn't pass on MacOS Catalina" }, { "msg_contents": "Hi,\n\nOn 2022-08-06 11:25:09 -0400, Tom Lane wrote:\n> (Hmm ... 
I wonder whether ninja uses the shell ...)\n\nIt does, but even if it didn't, we'd use a shell somewhere below perl or\npg_regress :(.\n\nThe meson build should still work without disabling SIP, I did the necessary\nhackery to set up the rpath equivalent up relatively. So both the real install\ntarget and the tmp_install/ should find libraries within themselves.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 6 Aug 2022 08:32:49 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: `make check` doesn't pass on MacOS Catalina" }, { "msg_contents": "\nOn 2022-08-06 Sa 11:25, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> I came across this when I was working on setting up some Dockerfiles for\n>> the buildfarm. Apparently LD_LIBRARY_PATH doesn't work on Alpine, at\n>> least out of the box, as it uses a different linker, and \"make check\"\n>> relies on it (or the moral equivalent) if \"make install\" hasn't been run.\n> I did some quick googling on this point. We seem not to be the only\n> project having linking issues on Alpine, and yet it does support\n> LD_LIBRARY_PATH according to some fairly authoritative-looking pages, eg\n>\n> https://www.musl-libc.org/doc/1.0.0/manual.html\n>\n> I suspect the situation is similar to macOS, ie there is some limitation\n> somewhere on whether LD_LIBRARY_PATH gets passed through. If memory\n> serves, the problem on SIP-enabled Mac is that DYLD_LIBRARY_PATH is\n> cleared upon invoking bash, so that we lose it anywhere that \"make\"\n> invokes a shell to run a subprogram. (Hmm ... I wonder whether ninja\n> uses the shell ...) 
I don't personally care at all about Alpine, but\n> maybe somebody who does could dig a little harder and characterize\n> the problem there better.\n>\n> \t\t\t\n\n\nWe probably should care about Alpine, because it's a good distro to use\nas the basis for Docker images, being fairly secure, very small, and\nbooting very fast.\n\nI'll dig some more, and possibly set up a (docker based) buildfarm instance.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 6 Aug 2022 12:10:55 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: `make check` doesn't pass on MacOS Catalina" }, { "msg_contents": "\nOn 2022-08-06 Sa 12:10, Andrew Dunstan wrote:\n> On 2022-08-06 Sa 11:25, Tom Lane wrote:\n>> Andrew Dunstan <andrew@dunslane.net> writes:\n>>> I came across this when I was working on setting up some Dockerfiles for\n>>> the buildfarm. Apparently LD_LIBRARY_PATH doesn't work on Alpine, at\n>>> least out of the box, as it uses a different linker, and \"make check\"\n>>> relies on it (or the moral equivalent) if \"make install\" hasn't been run.\n>> I did some quick googling on this point. We seem not to be the only\n>> project having linking issues on Alpine, and yet it does support\n>> LD_LIBRARY_PATH according to some fairly authoritative-looking pages, eg\n>>\n>> https://www.musl-libc.org/doc/1.0.0/manual.html\n>>\n>> I suspect the situation is similar to macOS, ie there is some limitation\n>> somewhere on whether LD_LIBRARY_PATH gets passed through. If memory\n>> serves, the problem on SIP-enabled Mac is that DYLD_LIBRARY_PATH is\n>> cleared upon invoking bash, so that we lose it anywhere that \"make\"\n>> invokes a shell to run a subprogram. (Hmm ... I wonder whether ninja\n>> uses the shell ...) 
I don't personally care at all about Alpine, but\n>> maybe somebody who does could dig a little harder and characterize\n>> the problem there better.\n>>\n>> \t\t\t\n>\n> We probably should care about Alpine, because it's a good distro to use\n> as the basis for Docker images, being fairly secure, very small, and\n> booting very fast.\n>\n> I'll dig some more, and possibly set up a (docker based) buildfarm instance.\n>\n>\n\nIt appears that LD_LIBRARY_PATH is supported on Alpine but it fails if\nchained, which seems somewhat braindead. The regression tests get errors\nlike this:\n\n\n+ERROR:  could not load library\n\"/app/buildroot/HEAD/pgsql.build/tmp_install/app/buildroot/HEAD/inst/lib/postgresql/libpqwalreceiver.so\":\nError loading shared library libpq.so.5: No such file or directory\n(needed by\n/app/buildroot/HEAD/pgsql.build/tmp_install/app/buildroot/HEAD/inst/lib/postgresql/libpqwalreceiver.so)\n\n\nIf the check stage is delayed until after the install stage the tests pass.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 8 Aug 2022 10:56:21 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: `make check` doesn't pass on MacOS Catalina" } ]
[ { "msg_contents": "Hi,\n\nIt looks like even though the commit e5253fdc4f that added the\nparallel_leader_participation GUC correctly categorized it as\nRESOURCES_ASYNCHRONOUS parameter in the code, but in the docs it is kept\nunder irrelevant section i.e. \"Query Planning/Other Planner Options\". This\nis reported in the bugs list [1], cc-ed the reporter.\n\nAttaching a small patch that moves the GUC description to the right place.\nThoughts?\n\n[1]\nhttps://www.postgresql.org/message-id/16972-42d4b0c15aa1d5f5%40postgresql.org\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 20 Apr 2021 21:16:49 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Docs: Move parallel_leader_participation GUC description under\n relevant category" }, { "msg_contents": "On Tue, Apr 20, 2021 at 09:16:49PM +0530, Bharath Rupireddy wrote:\n> It looks like even though the commit e5253fdc4f that added the\n> parallel_leader_participation GUC correctly categorized it as\n> RESOURCES_ASYNCHRONOUS parameter in the code, but in the docs it is kept\n> under irrelevant section i.e. \"Query Planning/Other Planner Options\". 
This\n> is reported in the bugs list [1], cc-ed the reporter.\n> \n> Attaching a small patch that moves the GUC description to the right place.\n> Thoughts?\n\nI would keep the discussion on the existing thread rather than spawn a\nnew one on -hackers for exactly the same problem, so I'll reply there\nin a minute.\n--\nMichael", "msg_date": "Wed, 21 Apr 2021 11:30:20 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Docs: Move parallel_leader_participation GUC description under\n relevant category" }, { "msg_contents": "On Wed, Apr 21, 2021 at 8:00 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Apr 20, 2021 at 09:16:49PM +0530, Bharath Rupireddy wrote:\n> > It looks like even though the commit e5253fdc4f that added the\n> > parallel_leader_participation GUC correctly categorized it as\n> > RESOURCES_ASYNCHRONOUS parameter in the code, but in the docs it is kept\n> > under irrelevant section i.e. \"Query Planning/Other Planner Options\". This\n> > is reported in the bugs list [1], cc-ed the reporter.\n> >\n> > Attaching a small patch that moves the GUC description to the right place.\n> > Thoughts?\n>\n> I would keep the discussion on the existing thread rather than spawn a\n> new one on -hackers for exactly the same problem, so I'll reply there\n> in a minute.\n\nI thought we might miss the discussion in the hackers list. I'm sorry\nfor starting a new thread. I'm closing this thread.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 21 Apr 2021 08:05:37 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Docs: Move parallel_leader_participation GUC description under\n relevant category" } ]
[ { "msg_contents": "Hi,\n\nDuring commits, and some other places, there's a short phase at which we\nblock checkpoints from starting:\n\n\t\t/*\n\t\t * Mark ourselves as within our \"commit critical section\". This\n\t\t * forces any concurrent checkpoint to wait until we've updated\n\t\t * pg_xact. Without this, it is possible for the checkpoint to set\n\t\t * REDO after the XLOG record but fail to flush the pg_xact update to\n\t\t * disk, leading to loss of the transaction commit if the system\n\t\t * crashes a little later.\n\nOne problem in the shared memory stats patch was that, to get rid of the\nO(N) cost of pgstat_vacuum_stat(), commits/aborts should inform which\nstats they drop.\n\nBecause we wouldn't do the dropping of stats as part of\nRecordTransactionCommit()'s critical section, that would have the danger\nof the stats dropping not being executed if we crash after WAL logging\nthe commit record, but before dropping the stats.\n\nIt's worthwhile to note that currently dropping of relfilenodes (e.g. a\ncommitting DROP TABLE or an aborting CREATE TABLE) has the same issue.\n\n\nAn obvious way to address that would be to set delayChkpt not just for\npart of RecordTransactionCommit()/Abort(), but also during the\nrelfilenode/stats dropping. But obviously that'd make it much more\nlikely that we'd actually prevent checkpoints from starting for a\nsignificant amount of time.\n\nWhich lead me to wonder why we need to *block* when starting a\ncheckpoint, waiting for a moment in which there are no concurrent\ncommits?\n\nI think we could replace the boolean per-backend delayChkpt with\nper-backend LSNs that indicate an LSN that for the backend won't cause\nrecovery issues. For commits this LSN could e.g. be the current WAL\ninsert location, just before the XLogInsert() (but I think we could\noptimize that a bit, but that's details). 
CreateCheckPoint() would then\nnot loop over HaveVirtualXIDsDelayingChkpt() before completing a\ncheckpoint, but instead compute the oldest LSN that any backend needs to\nbe included in the checkpoint.\n\nMoving the redo pointer to before where any backend is in a commit\ncritical section seems to provide sufficient (and I think sometimes\nstronger) protection against the hazards that delayChkpt aims to\nprevent? And it could do so without blocking.\n\n\nI think the blocking by delayChkpt is already an issue in some busy\nworkloads, although it's hard to tell how much outside of artificial\nworkloads against modified versions of PG, given that we don't expose\nsuch waits anywhere. Particularly that we now set delayChkpt in\nMarkBufferDirtyHint() seems to make that a lot more likely.\n\n\nDoes this seem like a viable idea, or did I entirely miss the boat?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 20 Apr 2021 18:56:18 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "non-blocking delayChkpt" } ]
[ { "msg_contents": "Hi,\n\nI'm reading the pull_up_sublinks, and find the below comments.\n\n * However, this optimization *only*\n * works at the top level of WHERE or a JOIN/ON clause, because we cannot\n * distinguish whether the ANY ought to return FALSE or NULL in cases\n * involving NULL inputs. Also, in an outer join's ON clause we can only\n * do this if the sublink is degenerate (ie, references only the nullable\n * side of the join).\n\nI tried to write some SQLs but still can't understand the above comments.\nAny\nhelp here?\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)\n\nHi,I'm reading the pull_up_sublinks, and find the below comments. * However, this optimization *only* * works at the top level of WHERE or a JOIN/ON clause, because we cannot * distinguish whether the ANY ought to return FALSE or NULL in cases * involving NULL inputs. Also, in an outer join's ON clause we can only * do this if the sublink is degenerate (ie, references only the nullable * side of the join).I tried to write some SQLs but still can't understand the above comments. Anyhelp here?-- Best RegardsAndy Fan (https://www.aliyun.com/)", "msg_date": "Wed, 21 Apr 2021 10:55:29 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "prerequisites of pull_up_sublinks" }, { "msg_contents": "On Wed, 21 Apr 2021 at 14:55, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> * However, this optimization *only*\n> * works at the top level of WHERE or a JOIN/ON clause, because we cannot\n> * distinguish whether the ANY ought to return FALSE or NULL in cases\n> * involving NULL inputs. Also, in an outer join's ON clause we can only\n> * do this if the sublink is degenerate (ie, references only the nullable\n> * side of the join).\n>\n> I tried to write some SQLs but still can't understand the above comments. 
Any\n> help here?\n\nThe code there is trying to convert sub links into joins.\n\nFor example:\n\nexplain select * from pg_Class where oid in (select attrelid from pg_attribute);\n\ncan be implemented as a join rather than a subplan or hashed subplan.\nYou should either see a Semi Join there or a regular join with the\npg_attribute side uniquified.\n\nCheck the plan when you change the above into NOT IN. We don't\ncurrently pull those up to become joins due to the fact that the null\nbehaviour for NOT IN is not compatible with anti-joins.\n\nDavid\n\n\n", "msg_date": "Wed, 21 Apr 2021 20:37:14 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: prerequisites of pull_up_sublinks" }, { "msg_contents": "On Wed, Apr 21, 2021 at 4:37 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Wed, 21 Apr 2021 at 14:55, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> > * However, this optimization *only*\n> > * works at the top level of WHERE or a JOIN/ON clause, because we cannot\n> > * distinguish whether the ANY ought to return FALSE or NULL in cases\n> > * involving NULL inputs. Also, in an outer join's ON clause we can only\n> > * do this if the sublink is degenerate (ie, references only the nullable\n> > * side of the join).\n> >\n> > I tried to write some SQLs but still can't understand the above\n> comments. Any\n> > help here?\n>\n> The code there is trying to convert sub links into joins.\n>\n> For example:\n>\n> explain select * from pg_Class where oid in (select attrelid from\n> pg_attribute);\n>\n> can be implemented as a join rather than a subplan or hashed subplan.\n> You should either see a Semi Join there or a regular join with the\n> pg_attribute side uniquified.\n>\n> Check the plan when you change the above into NOT IN. 
We don't\n> currently pull those up to become joins due to the fact that the null\n> behaviour for NOT IN is not compatible with anti-joins.\n>\n> I just checked the \"Not In to Join\" thread some days ago, but didn't\nrealize it here. Thank you David for your hint.\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)\n\nOn Wed, Apr 21, 2021 at 4:37 PM David Rowley <dgrowleyml@gmail.com> wrote:On Wed, 21 Apr 2021 at 14:55, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>  * However, this optimization *only*\n>  * works at the top level of WHERE or a JOIN/ON clause, because we cannot\n>  * distinguish whether the ANY ought to return FALSE or NULL in cases\n>  * involving NULL inputs. Also, in an outer join's ON clause we can only\n>  * do this if the sublink is degenerate (ie, references only the nullable\n>  * side of the join).\n>\n> I tried to write some SQLs but still can't understand the above comments. Any\n> help here?\n\nThe code there is trying to convert sub links into joins.\n\nFor example:\n\nexplain select * from pg_Class where oid in (select attrelid from pg_attribute);\n\ncan be implemented as a join rather than a subplan or hashed subplan.\nYou should either see a Semi Join there or a regular join with the\npg_attribute side uniquified.\n\nCheck the plan when you change the above into NOT IN.  We don't\ncurrently pull those up to become joins due to the fact that the null\nbehaviour for NOT IN is not compatible with anti-joins.I just checked the \"Not In to Join\" thread some days ago, but didn'trealize it here.  Thank you David for your hint. -- Best RegardsAndy Fan (https://www.aliyun.com/)", "msg_date": "Wed, 21 Apr 2021 19:47:27 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: prerequisites of pull_up_sublinks" } ]
[ { "msg_contents": "Hello.\n\nI found the following lines in xlogprefetch.c.\n\n> ereport(LOG,\n> (errmsg(\"recovery finished prefetching at %X/%X; \"\n> \"prefetch = \" UINT64_FORMAT \", \"\n> \"skip_hit = \" UINT64_FORMAT \", \"\n...\n\nIt is found in ja.po as\n\n\"recovery finished prefetching at %X/%X; prefetch = \"\n\n. . . .\n\nAnyway we can rely on %lld/%llu and we decided to use them in\ntranslatable strings. So the attached fixes (AFAICS) all instances of\nthe macros in translatable strings.\n\n# I just found 3286065651 did one instance of that so I excluded that\n# from this patch.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 21 Apr 2021 20:00:00 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "INT64_FORMAT in translatable strings" }, { "msg_contents": "On Wed, Apr 21, 2021 at 08:00:00PM +0900, Kyotaro Horiguchi wrote:\n> Anyway we can rely on %lld/%llu and we decided to use them in\n> translatable strings. So the attached fixes (AFAICS) all instances of\n> the macros in translatable strings.\n\nIndeed, good catch. Thanks.\n\n> # I just found 3286065651 did one instance of that so I excluded that\n> # from this patch.\n\nMay I ask why you are using \"unsigned long long int\" rather uint64?\nWhat you are proposing is more consistent with what's done in the\nsigned case like 3286065, so no objections from me, but I was just\nwondering. 
Personally, I think that I would just use \"unsigned long\nlong\", like in xlogreader.c or pg_controldata.c to take two examples.\n--\nMichael", "msg_date": "Thu, 22 Apr 2021 19:49:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: INT64_FORMAT in translatable strings" }, { "msg_contents": "On Thu, Apr 22, 2021 at 07:49:23PM +0900, Michael Paquier wrote:\n> \n> May I ask why you are using \"unsigned long long int\" rather uint64?\n\nMy understanding is that it's the project standard. See e.g.\nhttps://www.postgresql.org/message-id/1730584.1617836485@sss.pgh.pa.us\n\n\n", "msg_date": "Thu, 22 Apr 2021 18:56:28 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: INT64_FORMAT in translatable strings" }, { "msg_contents": "On Thu, Apr 22, 2021 at 06:56:28PM +0800, Julien Rouhaud wrote:\n> My understanding is that it's the project standard. See e.g.\n> https://www.postgresql.org/message-id/1730584.1617836485@sss.pgh.pa.us\n\nFWIW, I am not questioning the format of the specifiers, which is\nsomething I heard about, but the casts used on the values passed down\n:)\n--\nMichael", "msg_date": "Thu, 22 Apr 2021 20:12:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: INT64_FORMAT in translatable strings" }, { "msg_contents": "On Thu, Apr 22, 2021 at 08:12:25PM +0900, Michael Paquier wrote:\n> On Thu, Apr 22, 2021 at 06:56:28PM +0800, Julien Rouhaud wrote:\n> > My understanding is that it's the project standard. 
See e.g.\n> > https://www.postgresql.org/message-id/1730584.1617836485@sss.pgh.pa.us\n> \n> FWIW, I am not questioning the format of the specifiers, which is\n> something I heard about, but the casts used on the values passed down\n> :)\n\nBecause uint64 can be unsigned long int or unsigned long long int depending on\nthe platform?\n\n\n", "msg_date": "Thu, 22 Apr 2021 19:16:04 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: INT64_FORMAT in translatable strings" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Thu, Apr 22, 2021 at 07:49:23PM +0900, Michael Paquier wrote:\n>> May I ask why you are using \"unsigned long long int\" rather uint64?\n\n> My understanding is that it's the project standard. See e.g.\n> https://www.postgresql.org/message-id/1730584.1617836485@sss.pgh.pa.us\n\nIndeed, using %lld, %llu, etc with a matching cast to \"long long\" or\n\"unsigned long long\" is the approved way. Don't use [u]int64 because\nthat does not necessarily match these format specs. It's probably\nphysically compatible, but that won't stop pickier compilers from\nnagging about a format mismatch.\n\nBut what I thought Michael was griping about is the use of \"int\",\nwhich is a noise word here. Either \"long long int\" or \"long long\"\nwill work, but I think we've preferred the latter because shorter.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 22 Apr 2021 09:29:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: INT64_FORMAT in translatable strings" }, { "msg_contents": "At Thu, 22 Apr 2021 09:29:46 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > On Thu, Apr 22, 2021 at 07:49:23PM +0900, Michael Paquier wrote:\n> >> May I ask why you are using \"unsigned long long int\" rather uint64?\n> \n> > My understanding is that it's the project standard. 
See e.g.\n> > https://www.postgresql.org/message-id/1730584.1617836485@sss.pgh.pa.us\n> \n> Indeed, using %lld, %llu, etc with a matching cast to \"long long\" or\n> \"unsigned long long\" is the approved way. Don't use [u]int64 because\n> that does not necessarily match these format specs. It's probably\n> physically compatible, but that won't stop pickier compilers from\n> nagging about a format mismatch.\n> \n> But what I thought Michael was griping about is the use of \"int\",\n> which is a noise word here. Either \"long long int\" or \"long long\"\n> will work, but I think we've preferred the latter because shorter.\n\nYeah, there's no reason for the \"int\" other than just following the\nimmediate preceding commit 3286065651. I also prefer the shorter\nnotations. Attached.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 23 Apr 2021 09:43:09 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: INT64_FORMAT in translatable strings" }, { "msg_contents": "On Fri, Apr 23, 2021 at 09:43:09AM +0900, Kyotaro Horiguchi wrote:\n> At Thu, 22 Apr 2021 09:29:46 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n>> But what I thought Michael was griping about is the use of \"int\",\n>> which is a noise word here. Either \"long long int\" or \"long long\"\n>> will work, but I think we've preferred the latter because shorter.\n\nYep, that's what I meant. Sorry for the confusion.\n\n> Yeah, there's no reason for the \"int\" other than just following the\n> immediate preceding commit 3286065651. I also prefer the shorter\n> notations. Attached.\n\nNote that 3286065 only worked on signed integers.\n\n> -\t\t\t\t\t(uint32) (prefetcher->reader->EndRecPtr << 32),\n> -\t\t\t\t\t(uint32) (prefetcher->reader->EndRecPtr),\n> [..]\n> +\t\t\t\t\tLSN_FORMAT_ARGS(prefetcher->reader->EndRecPtr),\n\nGood catch here. LSN_FORMAT_ARGS() exists to prevent such errors.\n\nAnd applied. 
Thanks!\n--\nMichael", "msg_date": "Fri, 23 Apr 2021 13:26:09 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: INT64_FORMAT in translatable strings" }, { "msg_contents": "At Fri, 23 Apr 2021 13:26:09 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Fri, Apr 23, 2021 at 09:43:09AM +0900, Kyotaro Horiguchi wrote:\n> > At Thu, 22 Apr 2021 09:29:46 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> >> But what I thought Michael was griping about is the use of \"int\",\n> >> which is a noise word here. Either \"long long int\" or \"long long\"\n> >> will work, but I think we've preferred the latter because shorter.\n> \n> Yep, that's what I meant. Sorry for the confusion.\n> \n> > Yeah, there's no reason for the \"int\" other than just following the\n> > immediate preceding commit 3286065651. I also prefer the shorter\n> > notations. Attached.\n> \n> Note that 3286065 only worked on signed integers.\n\nYes. it uses redundant \"int\" for \"long\".\n\n> > -\t\t\t\t\t(uint32) (prefetcher->reader->EndRecPtr << 32),\n> > -\t\t\t\t\t(uint32) (prefetcher->reader->EndRecPtr),\n> > [..]\n> > +\t\t\t\t\tLSN_FORMAT_ARGS(prefetcher->reader->EndRecPtr),\n> \n> Good catch here. LSN_FORMAT_ARGS() exists to prevent such errors.\n> \n> And applied. Thanks!\n\nThanks!\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 23 Apr 2021 14:11:36 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: INT64_FORMAT in translatable strings" } ]
[ { "msg_contents": "Hi,\n\nWe are used to thinking about table vacuum and index vacuum as parts\nof a single, indivisible operation. You vacuum the table -- among\nother things by performing HOT pruning and remembering dead TIDs --\nand then you vacuum the indexes -- removing the remembered TIDs from\nthe index -- and then you vacuum the table some more, setting those\ndead TIDs unused -- and then you're done. And along the way you do\nsome other things too like considering truncation that aren't relevant\nto the point I want to make here. Now, the problem with this is that\nevery index has its own needs, which are separate from the needs of\nthe tables, as I think Peter Geoghegan and Masahiko Sawada were also\ndiscussing recently. Opportunistic index cleanup strategies like\nkill_prior_tuple and bottom-up deletion may work much better for some\nindexes than others, meaning that you could have some indexes that\nbadly need to be vacuumed because they are full of garbage, and other\nindexes on the same table where the opportunistic cleanup has worked\nperfectly and there is no need for vacuuming at all. Separately, the\ntable may or may not need to get some dead pointers set back to unused\nto avoid table bloat.\n\nBut, as things stand today, strategy options to deal with such\nsituations are limited. Leaving aside what the code actually does\nright now, let's talk about what options we have in theory with the\ntechnology as it exists now. They basically all boil down to stopping\nearly and then redoing the work later. We must always start with a\npass over the heap to collect dead TIDs, because otherwise there's\nnothing else to do. Now we can choose to stop, but then the next\nVACUUM will have to collect all those TIDs again. It may get to skip\nmore all-visible pages than the current vacuum did, but the pages that\nstill have dead TIDs will all have to be visited again. 
If we don't\nstop, then we can choose to vacuum all of the indexes or just some of\nthem, and then afterwards stop. But if we do this, the next VACUUM\nwill have to scan all indexes again for the same TIDs. Here, we don't\neven have the visibility map to allow skipping known-clean pages, so\nit's *really* a lot of work we have to redo. Thus what we normally do\nis press on to the third step, where we mark dead line pointers unused\nafter scanning every index in its entirety, and now they're gone and\nwe don't have to worry about them again. Barring emergency escape\nvalves, as things stand today, the frequency of table vacuuming is the\nsame as the frequency of index vacuuming, even though the *required*\nfrequency of vacuuming is not the same, and also varies from index to\nindex.\n\nNow, the reason for this is that when we discover dead TIDs, we only\nrecord them in memory, not on disk. So, as soon as VACUUM ends, we\nlose all knowledge of those TIDs and must rediscover them.\nSuppose we didn't do this, and instead had a \"dead TID\" fork for\neach table. Suppose further that this worked like a conveyor belt,\nsimilar to WAL, where every dead TID we store into the fork is\nassigned an identifying 64-bit number that is never reused. Then,\nsuppose that for each index, we store the number of the oldest entry\nthat might still need to be vacuumed from the index. Every time you\nperform what we now call the first heap pass of a VACUUM, you add the\nnew TIDs you find to the dead TID fork. Every time you vacuum an\nindex, the TIDs that need to be removed are those between the\noldest-entry pointer for that index and the current end of the TID\nfork. You remove all of those and then advance your oldest-entry\npointer accordingly. If that's too many TIDs to fit in\nmaintenance_work_mem, you can just read as many as will fit and\nadvance your oldest-entry pointer less far. 
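To make the bookkeeping sketched above concrete, here is a toy model in plain Python. It is purely illustrative -- the class and method names are invented for this sketch, and an in-memory list stands in for the on-disk, front-truncatable fork:

```python
class DeadTidFork:
    """Toy model of the conveyor-belt dead-TID fork sketched above.

    Entries get monotonically increasing numbers that are never reused;
    each index remembers the oldest entry it might still need to vacuum.
    """

    def __init__(self, index_names):
        self.entries = []                       # list of (entry_no, tid)
        self.next_entry_no = 0                  # stands in for a 64-bit counter
        self.oldest_needed = {name: 0 for name in index_names}

    def first_heap_pass(self, dead_tids):
        # Append newly discovered dead TIDs to the end of the fork.
        for tid in dead_tids:
            self.entries.append((self.next_entry_no, tid))
            self.next_entry_no += 1

    def vacuum_index(self, name, limit=None):
        # Remove the TIDs between this index's oldest-entry pointer and
        # the end of the fork, then advance the pointer.  'limit' models
        # a maintenance_work_mem that can't hold every TID at once.
        start = self.oldest_needed[name]
        batch = [e for e in self.entries if e[0] >= start]
        if limit is not None:
            batch = batch[:limit]
        if batch:
            self.oldest_needed[name] = batch[-1][0] + 1
        return [tid for _, tid in batch]        # TIDs to delete from this index

    def second_heap_pass(self):
        # TIDs that precede *every* index's pointer are safe to set
        # unused; truncate them off the front of the fork.
        horizon = min(self.oldest_needed.values())
        reclaimed = [tid for no, tid in self.entries if no < horizon]
        self.entries = [e for e in self.entries if e[0] >= horizon]
        return reclaimed
```

The point the toy model makes is that vacuuming one index never forces work on another: a TID only becomes reclaimable in the heap once every index's pointer has moved past it, and an index that falls behind simply leaves its pointer where it is.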
Every time you perform\nwhat we now call the second heap pass of a VACUUM, you find all the\nTIDs that precede every index's oldest-entry pointer and set them\nunused. You then throw away the associated storage at the OS level.\nThis requires a scheme where relations can be efficiently truncated\nfrom the beginning rather than only at the end, which is why I said \"a\nconveyor belt\" and \"similar to WAL\". Details deliberately vague since\nI am just brainstorming here.\n\nThis scheme adds a lot of complexity, which is a concern, but it seems\nto me that it might have several benefits. One is concurrency. You\ncould have one process gathering dead TIDs and adding them to the\ndead-TID fork while another process is vacuuming previously-gathered\nTIDs from some index. In fact, every index could be getting vacuumed\nat the same time, and different indexes could be removing different\nTID ranges. At the same time, you could have another process setting\ndead TIDs that all indexes have previously removed to unused.\nFurthermore, all of these operations can start in any order, and any\nof them can be repeated any number of times during a single run of any\nparticular other one, or indeed, without that particular one ever\nbeing run at all. Both heap phases can also now be done in smaller\nchunks, if desired. You can gather TIDs from a portion of the table\nand remember where you left off, and come back and pick up from that\npoint later, if you wish. You can likewise pick a subset of\ndead-TIDs-retired-from-all-indexes to set unused, and do just that\nmany, and then at a later time come back and do some more. Also, you\ncan now make mostly-independent decisions about how to perform each of\nthese operations, too. It's not completely independent: if you need to\nset some dead TIDs in the table to unused, you may have to force index\nvacuuming that isn't needed for bloat control. 
However, you only need\nto force it for indexes that haven't been vacuumed recently enough for\nsome other reason, rather than every index. If you have a target of\nreclaiming 30,000 TIDs, you can just pick the indexes where there are\nfewer than 30,000 dead TIDs behind their oldest-entry pointers and\nforce vacuuming only of those. By the time that's done, there will be\nat least 30,000 dead line pointers you can mark unused, and maybe\nmore, minus whatever reclamation someone else did concurrently.\n\nBut is this worthwhile? I think it depends a lot on what you think the\ncomparative required frequencies are for the various operations. If\nindex A needs to be vacuumed every 40 minutes and index B needs to be\nvacuumed every 55 minutes and the table that owns both of them needs\nto be vacuumed every 70 minutes, I am not sure there is a whole lot\nhere. I think you will be pretty much equally well off if you just do\nwhat we do today every 40 minutes and call it good. Also, you will not\nbenefit very much if the driving force is reclaiming dead line\npointers in the table itself. If that has to happen frequently, then\nthe indexes have to be scanned frequently, and this whole thing is a\nlot of work for not much. But, maybe that's not the case. Suppose\nindex A needs to be vacuumed every hour to avoid bloat, index B needs\nto be vacuumed every 4 hours to avoid bloat, and the table needs dead\nline pointers reclaimed every 5.5 hours. Well, now you can gain a lot.\nYou can vacuum index A frequently while vacuuming index B only as\noften as it needs, and you can reclaim dead line pointers on their own\nschedule based on whatever index vacuuming was already done for bloat\navoidance. 
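The 30,000-TID example above boils down to a very small computation. A hypothetical sketch (all names invented for this illustration):

```python
def indexes_to_force(oldest_needed, fork_start, target):
    # Pick the indexes whose oldest-entry pointer has advanced past
    # fewer than `target` entries since the start of the dead-TID fork:
    # they are the ones holding back reclamation of dead line pointers.
    return sorted(name for name, ptr in oldest_needed.items()
                  if ptr - fork_start < target)


pointers = {"idx_a": 50_000, "idx_b": 10_000, "idx_c": 0}
# idx_a is already past the target and is left alone; once idx_b and
# idx_c have been vacuumed through entry 30_000, at least 30_000 dead
# line pointers can be set unused.
assert indexes_to_force(pointers, 0, 30_000) == ["idx_b", "idx_c"]
```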
Without this scheme, there's just no way to give everybody\nwhat they need without some of the participants being \"dragged along\nfor the ride\" and forced into work that they don't actually need done\nsimply because \"that's how it works.\"\n\nOne thing I don't know is whether the kind of scenario that I describe\nabove is common, i.e. is the main reason we need to vacuum to control\nindex bloat, where this kind of approach seems likely to help, or is\nit to reclaim dead line pointers in the heap, where it's not? I'd be\ninterested in hearing from people who have some experience in this\narea, or at least better intuition than I do.\n\nI'm interested in this idea partly because I think it would be much\nmore likely to help in a hypothetical world where we had global\nindexes. Imagine a partitioned table where each partition has a local\nindex and then there is also a global index which indexes tuples\nfrom every partition. Waving away the difficulty of making such a\nthing work, there's a vacuuming problem here, which has been discussed\nbefore. In short, if you tried to handle this in the naive way, you'd\nend up having to vacuum the global index every time you vacuumed any\npartition. That sucks. Suppose that there are 1000 partitions, each\npartition is 1GB, and each local index is 50MB. All things being\nequal, the global index should end up being about as big as all of the\nlocal indexes put together, which in this case would be 50GB. Clearly,\nwe do not want to vacuum each partition by scanning the 1GB partition\n+ the 50MB local index + the 50GB global index. That's insane. 
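Back-of-the-envelope arithmetic on the sizes above, just to put a number on "insane":

```python
# Sizes from the example above, in MB.
partitions = 1000
heap_mb, local_idx_mb = 1000, 50
global_idx_mb = partitions * local_idx_mb   # ~50GB global index

# Naive: every partition vacuum also scans the whole global index.
naive = partitions * (heap_mb + local_idx_mb + global_idx_mb)
# Decoupled: vacuum each partition + local index, then clean the
# global index once for the whole batch.
decoupled = partitions * (heap_mb + local_idx_mb) + global_idx_mb

assert naive == 51_050_000      # ~51 TB of scanning
assert decoupled == 1_100_000   # ~1.1 TB
```

So a full vacuuming cycle under the naive scheme scans roughly 50x more data, almost all of it repeated scans of the same global index.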
With\nthe above system, since everything's decoupled, you can vacuum the\npartition tables individually as often as required, and similarly for\ntheir local indexes, but put off vacuuming the global index until\nyou've vacuumed a bunch, maybe all, of the partitions, so that the\nwork of cleaning up the global index cleans up dead TIDs from many or\nall partitions instead of just one at a time.\n\nNow, the fly in the ointment here is that this supposes that we don't\nget forced into vacuuming the global index too quickly because of dead\nline pointer accumulation. But, I think if that does happen, with\ncareful scheduling, we might not really be worse off than we would\nhave been without partitioning. If we scan the table for just one\npartition and, say, exhaust maintenance_work_mem, we have some\nexpensive index vacuuming to do immediately, but that would've also\nhappened in pretty much the same way with an unpartitioned table. If\nwe don't fill maintenance_work_mem but we do notice that the table for\nthis partition is full of dead line pointers that we need to reclaim,\nwe can still choose to scan some other partitions and collect some\nmore dead TIDs before cleaning the global index. That could delay\nactually getting those line pointers reclaimed, but an unpartitioned\ntable would have suffered from at least as much delay, because it\nwouldn't even consider the possibility of stopping before scanning\nevery available table page, and we could choose to stop after dealing\nwith only some partitions but not all. It's probably tricky to get the\nautovacuum algorithm right here, but there seems to be some room for\noptimism.\n\nEven if global indexes never happened, though, I think this could have\nother benefits. For example, the wraparound failsafe mechanism\nrecently added by Masahiko Sawada and Peter Geoghegan bypasses index\nvacuuming when wraparound danger is imminent. 
The only problem is that\nmaking that decision means throwing away the accumulated list of dead\nTIDs, which then need to be rediscovered whenever we get around to\nvacuuming the indexes. But that's avoidable, if they're stored on disk\nrather than in RAM.\n\nOne rather serious objection to this whole line of attack is that we'd\nideally like VACUUM to reclaim disk space without using any more, in\ncase that is the motivation for running VACUUM in the first place. A related\nobjection is that if it's sometimes agreeable to do everything all at\nonce as we currently do, the I/O overhead could be avoided. I think\nwe'd probably have to retain a code path that buffers the dead TIDs in\nmemory to account, at least, for the low-on-disk-space case, and maybe\nthat can also be used to avoid I/O in some other cases, too. I haven't\nthought through all the details here. It seems to me that the actual\nI/O avoidance is probably not all that much - each dead TID is much\nsmaller than the deleted tuple that gave rise to it, and typically you\ndon't delete all the tuples at once - but it might be material in some\ncases, and it's definitely material if you don't have enough disk\nspace left for it to complete without error.\n\nThoughts?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 21 Apr 2021 11:21:31 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "decoupling table and index vacuum" }, { "msg_contents": "Hi,\n\nOn 2021-04-21 11:21:31 -0400, Robert Haas wrote:\n> Opportunistic index cleanup strategies like\n> kill_prior_tuple and bottom-up deletion may work much better for some\n> indexes than others, meaning that you could have some indexes that\n> badly need to be vacuumed because they are full of garbage, and other\n> indexes on the same table where the opportunistic cleanup has worked\n> perfectly and there is no need for vacuuming at all.\n\nPartial indexes are another case that can lead to individual 
indexes\nbeing without bloat, with others severely bloated.\n\n\n> This requires a scheme where relations can be efficiently truncated\n> from the beginning rather than only at the end, which is why I said \"a\n> conveyor belt\" and \"similar to WAL\". Details deliberately vague since\n> I am just brainstorming here.\n\nI'm not sure that's the only way to deal with this. While some form of\ngeneric \"conveyor belt\" infrastructure would be a useful building block,\nand it'd be sensible to use it here if it existed, it seems feasible to\nstore dead tids in a different way here. You could e.g. have per-heap-vacuum\nfiles with a header containing LSNs that indicate the age of the\ncontents.\n\n\n> This scheme adds a lot of complexity, which is a concern, but it seems\n> to me that it might have several benefits. One is concurrency. You\n> could have one process gathering dead TIDs and adding them to the\n> dead-TID fork while another process is vacuuming previously-gathered\n> TIDs from some index.\n\nI think it might even open the door to using multiple processes\ngathering dead TIDs for the same relation.\n\n\n> In fact, every index could be getting vacuumed at the same time, and\n> different indexes could be removing different TID ranges.\n\nWe kind of have this feature right now, due to parallel vacuum...\n\n\n> It's not completely independent: if you need to set some dead TIDs in\n> the table to unused, you may have to force index vacuuming that isn't\n> needed for bloat control. However, you only need to force it for\n> indexes that haven't been vacuumed recently enough for some other\n> reason, rather than every index.\n\nHm - how would we know how recently that TID has been marked dead? We\ndon't even have xids for dead ItemIds... 
Maybe you're intending to\nanswer that in your next paragraph, but it's not obvious to me that'd be\nsufficient...\n\n> If you have a target of reclaiming 30,000 TIDs, you can just pick the\n> indexes where there are fewer than 30,000 dead TIDs behind their\n> oldest-entry pointers and force vacuuming only of those. By the time\n> that's done, there will be at least 30,000 dead line pointers you can\n> mark unused, and maybe more, minus whatever reclamation someone else\n> did concurrently.\n\n\n\nOne thing that you didn't mention so far is that this'd allow us to add\ndead TIDs to the \"dead tid\" file outside of vacuum too. In some\nworkloads most of the dead tuple removal happens as part of on-access\nHOT pruning. While some indexes are likely to see that via the\nkilltuples logic, others may not. Being able to have more aggressive\nindex vacuum for the one or two bloated index, without needing to rescan\nthe heap, seems like it'd be a significant improvement.\n\n\n> Suppose index A needs to be vacuumed every hour to avoid bloat, index\n> B needs to be vacuumed every 4 hours to avoid bloat, and the table\n> needs dead line pointers reclaimed every 5.5 hours. Well, now you can\n> gain a lot. You can vacuum index A frequently while vacuuming index B\n> only as often as it needs, and you can reclaim dead line pointers on\n> their own schedule based on whatever index vacuuming was already done\n> for bloat avoidance. Without this scheme, there's just no way to give\n> everybody what they need without some of the participants being\n> \"dragged along for the ride\" and forced into work that they don't\n> actually need done simply because \"that's how it works.\"\n\nHave you thought about how we would do the scheduling of vacuums for the\ndifferent indexes? We don't really have useful stats for the number of\ndead index entries to be expected in an index. It'd not be hard to track\nhow many entries are removed in an index via killtuples, but\ne.g. 
estimating how many dead entries there are in a partial index seems\nquite hard (at least without introducing significant overhead).\n\n\n> One thing I don't know is whether the kind of scenario that I describe\n> above is common, i.e. is the main reason we need to vacuum to control\n> index bloat, where this kind of approach seems likely to help, or is\n> it to reclaim dead line pointers in the heap, where it's not? I'd be\n> interested in hearing from people who have some experience in this\n> area, or at least better intuition than I do.\n\nI think doing something like this has a fair bit of potential. Being\nable to perform freezing independently of index scans, without needing\nto scan the table again to re-discover dead line item pointers seems\nlike it'd be a win. More aggressive/targeted index vacuum in cases where\nmost tuples are removed via HOT pruning seems like a win. Not having to\nrestart from scratch after a cancelled autovacuum would be a\nwin. Additional parallelization seems like a win...\n\n\n> One rather serious objection to this whole line of attack is that we'd\n> ideally like VACUUM to reclaim disk space without using any more, in\n> case that is the motivation for running VACUUM in the first place.\n\nI suspect we'd need a global limit of space used for this data. If above\nthat limit we'd switch to immediately performing the work required to\nremove some of that space.\n\n\n> A related objection is that if it's sometimes agreeable to do\n> everything all at once as we currently do, the I/O overhead could be\n> avoided. 
I think we'd probably have to retain a code path that buffers\n> the dead TIDs in memory to account, at least, for the\n> low-on-disk-space case, and maybe that can also be used to avoid I/O\n> in some other cases, too.\n\nWe'd likely want to do some batching of insertions into the \"dead tid\"\nmap - which'd probably end up looking similar to a purely in-memory path\nanyway.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 21 Apr 2021 14:38:25 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Wed, Apr 21, 2021 at 8:21 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> We are used to thinking about table vacuum and index vacuum as parts\n> of a single, indivisible operation. You vacuum the table -- among\n> other things by performing HOT pruning and remembering dead TIDs --\n> and then you vacuum the indexes -- removing the remembered TIDs from\n> the index -- and then you vacuum the table some more, setting those\n> dead TIDs unused -- and then you're done. And along the way you do\n> some other things too like considering truncation that aren't relevant\n> to the point I want to make here. Now, the problem with this is that\n> every index has its own needs, which are separate from the needs of\n> the tables, as I think Peter Geoghegan and Masahiko Sawada were also\n> discussing recently.\n\nI'm very happy to see that you've taken an interest in this work! I\nbelieve it's an important area. It's too important to be left to only\ntwo contributors. I welcome your participation as an equal partner in\nthe broader project to fix problems with VACUUM.\n\nMasahiko and I have had plenty of ideas about where this could go next\n-- way too many ideas, in fact. 
Maybe that kind of partnership sounds\nunnecessary or at least seems premature, but it turns out that this\narea is extremely broad and far reaching, if you really think it\nthrough -- you end up having to negotiate rather a lot all at once.\nApart from anything else, I simply don't have the authority to commit\na bunch of stuff that implicitly makes Postgres do things a certain\nway in a huge number of different subsystems. (Whether or not I'd be\nright in each case is beside the point.)\n\nMy most ambitious goal is finding a way to remove the need to freeze\nor to set hint bits. I think that we can do this by inventing a new\nkind of VACUUM just for aborted transactions, which doesn't do index\nvacuuming. You'd need something like an ARIES-style dirty page table\nto make this cheap -- so it's a little like UNDO, but not very much.\nThe basic idea is that eagerly cleaning up aborted transactions in an\nautovacuum worker allows you to broadly assume that most blocks\ncontain definitely-committed heap tuples, or else LP_DEAD stubs (which\nof course don't contain any XIDs). You'd still have something like\nconventional VACUUM, which wouldn't change that much. Freezing is\nlargely implicit, but maybe you freeze tuples the old way if and only\nif a backend dirties a \"known-all-committed\" block -- that can still be\nexpensive.\n\nThe visibility map still has an all-visible bit, but now it also has\nan all-committed bit (or maybe it's a separate data structure). The\ncombination of all-visible and all-committed is precisely the same as\nfrozen, so you don't need a separate VM bit for that anymore.\n\nNotice that this design doesn't change much about our basic approach\nto transaction management. It just further decouples things.\nConventional VACUUMs are now only about garbage collection, and so can\nbe further optimized with that goal in mind. 
It's much easier to do\nclever scheduling if VACUUM really only has to do garbage collection.\n\n> Opportunistic index cleanup strategies like\n> kill_prior_tuple and bottom-up deletion may work much better for some\n> indexes than others, meaning that you could have some indexes that\n> badly need to be vacuumed because they are full of garbage, and other\n> indexes on the same table where the opportunistic cleanup has worked\n> perfectly and there is no need for vacuuming at all.\n\nI know I say this all the time these days, but it seems worth\nrepeating now: it is a qualitative difference, not a quantitative\ndifference. Bottom-up index deletion will frequently stop most indexes\non a table from growing by even one single block, while indexes that\ncannot use the optimization (indexes that are logically modified by\nUPDATE statements) might be hugely bloated. If this is the case during\none VACUUM operation, it's probably going to work like that with all\nfuture VACUUM operations. It's abundantly clear that the current\nquantitative approach just cannot be pushed much further.\n\n> But, as things stand today, strategy options to deal with such\n> situations are limited. Leaving aside what the code actually does\n> right now, let's talk about what options we have in theory with the\n> technology as it exists now. They basically all boil down to stopping\n> early and then redoing the work later. We must always start with a\n> pass over the heap to collect dead TIDs, because otherwise there's\n> nothing else to do. Now we can choose to stop, but then the next\n> VACUUM will have to collect all those TIDs again. It may get to skip\n> more all-visible pages than the current vacuum did, but the pages that\n> still have dead TIDs will all have to be visited again. If we don't\n> stop, then we can choose to vacuum all of the indexes or just some of\n> them, and then afterwards stop. 
But if we do this, the next VACUUM\n> will have to scan all indexes again for the same TIDs. Here, we don't\n> even have the visibility map to allow skipping known-clean pages, so\n> it's *really* a lot of work we have to redo. Thus what we normally do\n> is press on to the third step, where we mark dead line pointers unused\n> after scanning every index in its entirety, and now they're gone and\n> we don't have to worry about them again. Barring emergency escape\n> valves, as things stand today, the frequency of table vacuuming is the\n> same as the frequency of index vacuuming, even though the *required*\n> frequency of vacuuming is not the same, and also varies from index to\n> index.\n\nI'm 100% in agreement about all of this.\n\n> Now, the reason for this is that when we discover dead TIDs, we only\n> record them in memory, not on disk. So, as soon as VACUUM ends, we\n> lose all knowledge of those TIDs and must rediscover\n> them. Suppose we didn't do this, and instead had a \"dead TID\" fork for\n> each table.\n\nI had a similar idea myself recently -- clearly remembering the TIDs\nthat you haven't vacuumed to save work later on makes a lot of sense.\nI didn't get very far with it, even in my own head, but I like the\ndirection you're taking it. Having it work a little like a queue makes\na lot of sense.\n\n> Suppose further that this worked like a conveyor belt,\n> similar to WAL, where every dead TID we store into the fork is\n> assigned an identifying 64-bit number that is never reused. Then,\n> suppose that for each index, we store the number of the oldest entry\n> that might still need to be vacuumed from the index. Every time you\n> perform what we now call the first heap pass of a VACUUM, you add the\n> new TIDs you find to the dead TID fork.\n\nMaybe we can combine this known-dead-tid structure with the visibility\nmap. 
Index-only scans might be able to reason about blocks that are\nmostly all-visible, but still have stub LP_DEAD line pointers that\nthis other structure knows about -- so you can get index-only scans\nwithout requiring a full round of traditional vacuuming. Maybe there\nis some opportunity like that, but not sure how to fit it in to\neverything else right now.\n\n> Every time you vacuum an\n> index, the TIDs that need to be removed are those between the\n> oldest-entry pointer for that index and the current end of the TID\n> fork. You remove all of those and then advance your oldest-entry\n> pointer accordingly. If that's too many TIDs to fit in\n> maintenance_work_mem, you can just read as many as will fit and\n> advance your oldest-entry pointer less far. Every time you perform\n> what we now call the second heap pass of a VACUUM, you find all the\n> TIDs that precede every index's oldest-entry pointer and set them\n> unused. You then throw away the associated storage at the OS level.\n> This requires a scheme where relations can be efficiently truncated\n> from the beginning rather than only at the end, which is why I said \"a\n> conveyor belt\" and \"similar to WAL\". Details deliberately vague since\n> I am just brainstorming here.\n\nThis amounts to adding yet more decoupling -- which seems great to me.\nAnything that gives us the option but not the obligation to perform\nwork either more lazily or more eagerly (whichever makes sense) seems\nhighly desirable to me. Especially if we can delay our decision until\nthe last possible point, when we can have relatively high confidence\nthat we know what we're doing. And especially if holding it open as an\noption is pretty cheap (that's the point of remembering dead TIDs).\n\n> Furthermore, all of these operations can start in any order, and any\n> of them can be repeated any number of times during a single run of any\n> particular other one, or indeed, without that particular one ever\n> being run at all. 
Both heap phases can also now be done in smaller\n> chunks, if desired.\n\n> But is this worthwhile? I think it depends a lot on what you think the\n> comparative required frequencies are for the various operations.\n\nThere is a risk that you'll never think that any optimization is worth\nit because each optimization seems marginal in isolation. Sometimes a\ndiversity of strategies is the real strategy. Let's say you have a\nbunch of options that you can pick and choose from, with fallbacks and\nwith ways of course correcting even halfway through the VACUUM. It's\npossible that that will work wonderfully well for a given complex user\nworkload, but if you subtract away *any one* of the strategies\nsuddenly things get much worse in some obvious high-level way. It's\nentirely possible for a single table to have different needs in\ndifferent parts of the table.\n\nCertainly works that way with indexes -- that much I can say for sure.\n\n> If index A needs to be vacuumed every 40 minutes and index B needs to be\n> vacuumed every 55 minutes and the table that owns both of them needs\n> to be vacuumed every 70 minutes, I am not sure there is a whole lot\n> here. I think you will be pretty much equally well off if you just do\n> what we do today every 40 minutes and call it good.\n\nThat's probably all true, but I think that an excellent heuristic is\nto work hard to avoid really terrible outcomes, rather than trying\nhard to get good outcomes. The extremes don't just matter -- they may\neven be the only thing that matters.\n\nIf index A needs to be vacuumed about as frequently as index B anyway,\nthen the user happens to naturally be in a position where the current\nsimplistic scheduling works for them. Which is fine, as far as it\ngoes, but that's not really where we have problems. If you consider\nthe \"qualitative, not quantitative\" perspective, things change. 
It's\nnow pretty unlikely that all of the indexes on the same table will\nhave approximately the same needs -- except when there is very little\nto do with indexes anyway, which is pretty much not interesting\nanyway. Because workloads generally don't logically modify all indexed\ncolumns within each UPDATE. They just don't tend to look like that in\npractice.\n\n> Also, you will not\n> benefit very much if the driving force is reclaiming dead line\n> pointers in the table itself. If that has to happen frequently, then\n> the indexes have to be scanned frequently, and this whole thing is a\n> lot of work for not much. But, maybe that's not the case. Suppose\n> index A needs to be vacuumed every hour to avoid bloat, index B needs\n> to be vacuumed every 4 hours to avoid bloat, and the table needs dead\n> line pointers reclaimed every 5.5 hours. Well, now you can gain a lot.\n> You can vacuum index A frequently while vacuuming index B only as\n> often as it needs, and you can reclaim dead line pointers on their own\n> schedule based on whatever index vacuuming was already done for bloat\n> avoidance. Without this scheme, there's just no way to give everybody\n> what they need without some of the participants being \"dragged along\n> for the ride\" and forced into work that they don't actually need done\n> simply because \"that's how it works.\"\n\nRight. And, the differences between index A and index B will tend to\nbe pretty consistent and often much larger than this.\n\nMany indexes would never have to be vacuumed, even with non-HOT\nUPDATES due to bottom-up index deletion -- because they literally\nwon't even have one single page split for hours, while maybe one index\ngets 3x larger in the same timeframe. Eventually you'll need to vacuum\nthe indexes all the same (not just the bloated index), but that's only\nrequired to enable safely performing heap vacuuming. 
It's not so bad\nif the non-bloated indexes won't be dirtied and if it's not so\nfrequent (dirtying pages is the real cost to keep under control here).\n\n> One thing I don't know is whether the kind of scenario that I describe\n> above is common, i.e. is the main reason we need to vacuum to control\n> index bloat, where this kind of approach seems likely to help, or is\n> it to reclaim dead line pointers in the heap, where it's not? I'd be\n> interested in hearing from people who have some experience in this\n> area, or at least better intuition than I do.\n\nThe paradox here is:\n\n1. Workload characteristics are important and must be exploited to get\noptimal performance.\n\n2. Workloads are too complicated and unpredictable to ever truly understand.\n\nRoughly speaking, the solution that I think has the most promise is to\nmake it okay for your heuristics to be wrong. You do this by keeping\nthe costs simple, fixed and low, and by doing things that have\nmultiple benefits (not just one). This is why it's important to give\nVACUUM a bunch of strategies that it can choose from and switch back\nand forth from, with minimal commitment -- you let VACUUM figure out\nwhat to do about the workload through trial and error. It has to try\nand fail on occasion -- you must be willing to pay the cost of\nnegative feedback (though the cost must be carefully managed). This\napproach is perhaps sufficient to cover all of the possible extremes\nwith all workloads. I think that the extremes are where our problems\nall are, or close to it.\n\nThe cost shouldn't be terribly noticeable because you have the\nflexibility to change your mind at the first sign of an issue. So you\nnever pay an extreme cost (you pay a pretty low fixed cost\nincrementally, at worst), but you do sometimes (and maybe even often)\nget an extreme benefit -- the benefit of avoiding current pathological\nperformance issues. 
We know that the cost of bloat is very non-linear\nin a bunch of ways that can be pretty well understood, so that seems\nlike the right thing to focus on -- this is perhaps the only thing\nthat we can expect to understand with a relatively high degree of\nconfidence. We can live with a lot of uncertainty about what's going\non with the workload by managing it continually, ramping up and down,\netc.\n\n> Clearly,\n> we do not want to vacuum each partition by scanning the 1GB partition\n> + the 50MB local index + the 50GB global index. That's insane. With\n> the above system, since everything's decoupled, you can vacuum the\n> partition tables individually as often as required, and similarly for\n> their local indexes, but put off vacuuming the global index until\n> you've vacuumed a bunch, maybe all, of the partitions, so that the\n> work of cleaning up the global index cleans up dead TIDs from many or\n> all partitions instead of just one at a time.\n\nI can't think of any other way of sensibly implementing global indexes.\n\n> Now, the fly in the ointment here is that this supposes that we don't\n> get forced into vacuuming the global index too quickly because of dead\n> line pointer accumulation. But, I think if that does happen, with\n> careful scheduling, we might not really be worse off than we would\n> have been without partitioning. If we scan the table for just one\n> partition and, say, exhaust maintenance_work_mem, we have some\n> expensive index vacuuming to do immediately, but that would've also\n> happened in pretty much the same way with an unpartitioned table.\n\nBut you can at least drop the partitions with a global index. 
It\nshouldn't be too hard to make that work without breaking things.\n\n> It's probably tricky to get the\n> autovacuum algorithm right here, but there seems to be some room for\n> optimism.\n\nI think that it's basically okay if global indexes suck when you do\nlots of UPDATEs -- this is a limitation that users can probably live\nwith. What's not okay is if they suck when you do relatively few\nUPDATEs. It's especially not okay to have to scan the global index to\ndelete one index tuple that points to one LP_DEAD item. Since you tend\nto get a tiny number of LP_DEAD items even when the DBA bends over\nbackwards to make all UPDATEs use HOT. Getting that to happen 99%+ of\nthe time is so much easier than getting it to happen 100% of the time.\nThere can be enormous asymmetry with this stuff.\n\nLong term, I see VACUUM evolving into something that can only be\nconfigured in an advisory way. It's too hard to tune this stuff\nbecause what we really want here is to structure many things as an\noptimization problem, and to have a holistic view that considers how\nthe table changes over time -- over multiple VACUUM operations. We can\nsafely be very lazy if we have some basic sense of proportion about\nwhat the risk is. For example, maybe we limit the number of newly\ndirtied pages during VACUUM by being lazy about pruning pages that\ndon't happen to be dirty when encountered within VACUUM. We still have\nsome sense of how much work we've put off, so as to never get in over\nour head with debt. We might also have a sense of how many dirty pages\nin total there are in the system currently -- maybe if the DB is not\nbusy right now we can be extra aggressive. In short, we apply holistic\nthinking.\n\n> Even if global indexes never happened, though, I think this could have\n> other benefits. For example, the wraparound failsafe mechanism\n> recently added by Masahiko Sawada and Peter Geoghegan bypasses index\n> vacuuming when wraparound danger is imminent. 
The only problem is that\n> making that decision means throwing away the accumulated list of dead\n> TIDs, which then need to be rediscovered whenever we get around to\n> vacuuming the indexes. But that's avoidable, if they're stored on disk\n> rather than in RAM.\n\nYeah, that's not ideal.\n\n> One rather serious objection to this whole line of attack is that we'd\n> ideally like VACUUM to reclaim disk space without using any more, in\n> case that is the motivation for running VACUUM in the first place. A related\n> objection is that if it's sometimes agreeable to do everything all at\n> once as we currently do, the I/O overhead could be avoided.\n\nOf course it's possible that what we currently do will be optimal. But\nit's pretty much a question of mostly-independent things all going the\nsame way. So I expect that it will be rare.\n\n> I think\n> we'd probably have to retain a code path that buffers the dead TIDs in\n> memory to account, at least, for the low-on-disk-space case, and maybe\n> that can also be used to avoid I/O in some other cases, too. I haven't\n> thought through all the details here. It seems to me that the actual\n> I/O avoidance is probably not all that much - each dead TID is much\n> smaller than the deleted tuple that gave rise to it, and typically you\n> don't delete all the tuples at once - but it might be material in some\n> cases, and it's definitely material if you don't have enough disk\n> space left for it to complete without error.\n\nAll true.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Wed, 21 Apr 2021 16:51:35 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Wed, Apr 21, 2021 at 8:51 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> Now, the reason for this is that when we discover dead TIDs, we only\n> record them in memory, not on disk.
So, as soon as VACUUM ends, we\n> lose all knowledge of those dead TIDs and must rediscover\n> them. Suppose we didn't do this, and instead had a \"dead TID\" fork for\n> each table.\n\nInteresting idea.\n\nHowever, you only need\n> to force it for indexes that haven't been vacuumed recently enough for\n> some other reason, rather than every index. If you have a target of\n> reclaiming 30,000 TIDs, you can just pick the indexes where there are\n> fewer than 30,000 dead TIDs behind their oldest-entry pointers and\n> force vacuuming only of those.\n\nHow do we decide this target? I mean, at a given point how do we decide\nwhat the limit of dead TIDs is at which we want to trigger the\nindex vacuuming?\n\n> One rather serious objection to this whole line of attack is that we'd\n> ideally like VACUUM to reclaim disk space without using any more, in\n> case that is the motivation for running VACUUM in the first place. A related\n> objection is that if it's sometimes agreeable to do everything all at\n> once as we currently do, the I/O overhead could be avoided. I think\n> we'd probably have to retain a code path that buffers the dead TIDs in\n> memory to account, at least, for the low-on-disk-space case, and maybe\n> that can also be used to avoid I/O in some other cases, too. I haven't\n> thought through all the details here.
It seems to me that the actual\n> I/O avoidance is probably not all that much - each dead TID is much\n> smaller than the deleted tuple that gave rise to it, and typically you\n> don't delete all the tuples at once - but it might be material in some\n> cases, and it's definitely material if you don't have enough disk\n> space left for it to complete without error.\n\nIs it a good idea to always perform I/O after collecting the dead\nTIDs, or should there be an option that the user can configure so\nthat it aggressively vacuums all the indexes and this I/O overhead is\navoided completely?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 22 Apr 2021 17:20:48 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Thu, Apr 22, 2021 at 8:51 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Wed, Apr 21, 2021 at 8:21 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > We are used to thinking about table vacuum and index vacuum as parts\n> > of a single, indivisible operation. You vacuum the table -- among\n> > other things by performing HOT pruning and remembering dead TIDs --\n> > and then you vacuum the indexes -- removing the remembered TIDs from\n> > the index -- and then you vacuum the table some more, setting those\n> > dead TIDs unused -- and then you're done. And along the way you do\n> > some other things too like considering truncation that aren't relevant\n> > to the point I want to make here. Now, the problem with this is that\n> > every index has its own needs, which are separate from the needs of\n> > the tables, as I think Peter Geoghegan and Masahiko Sawada were also\n> > discussing recently.\n>\n> I'm very happy to see that you've taken an interest in this work! I\n> believe it's an important area. It's too important to be left to only\n> two contributors.
I welcome your participation as an equal partner in\n> the broader project to fix problems with VACUUM.\n\n+many\n\n>\n> > Now, the reason for this is that when we discover dead TIDs, we only\n> > record them in memory, not on disk. So, as soon as VACUUM ends, we\n> > lose all knowledge of those dead TIDs and must rediscover\n> > them. Suppose we didn't do this, and instead had a \"dead TID\" fork for\n> > each table.\n>\n> I had a similar idea myself recently -- clearly remembering the TIDs\n> that you haven't vacuumed to save work later on makes a lot of sense.\n> I didn't get very far with it, even in my own head, but I like the\n> direction you're taking it. Having it work a little like a queue makes\n> a lot of sense.\n\nAgreed. (I just remembered that I gave a talk about a similar idea at PGCon\na couple of years ago.)\n\nAnother good point of this \"dead TID fork\" design is that IIUC we\ndon't necessarily need to make it crash-safe. We would not need WAL\nlogging for remembering dead TIDs. If the server crashes, we can\nsimply throw it away and assume we haven't done the first heap pass\nyet.\n\n>\n> > Suppose further that this worked like a conveyor belt,\n> > similar to WAL, where every dead TID we store into the fork is\n> > assigned an identifying 64-bit number that is never reused. Then,\n> > suppose that for each index, we store the number of the oldest entry\n> > that might still need to be vacuumed from the index. Every time you\n> > perform what we now call the first heap pass of a VACUUM, you add the\n> > new TIDs you find to the dead TID fork.\n>\n> Maybe we can combine this known-dead-tid structure with the visibility\n> map. Index-only scans might be able to reason about blocks that are\n> mostly all-visible, but still have stub LP_DEAD line pointers that\n> this other structure knows about -- so you can get index-only scans\n> without requiring a full round of traditional vacuuming.
Maybe there\n> is some opportunity like that, but not sure how to fit it in to\n> everything else right now.\n\nInteresting idea.\n\n>\n> > Every time you vacuum an\n> > index, the TIDs that need to be removed are those between the\n> > oldest-entry pointer for that index and the current end of the TID\n> > fork. You remove all of those and then advance your oldest-entry\n> > pointer accordingly. If that's too many TIDs to fit in\n> > maintenance_work_mem, you can just read as many as will fit and\n> > advance your oldest-entry pointer less far. Every time you perform\n> > what we now call the second heap pass of a VACUUM, you find all the\n> > TIDs that precede every index's oldest-entry pointer and set them\n> > unused. You then throw away the associated storage at the OS level.\n> > This requires a scheme where relations can be efficiently truncated\n> > from the beginning rather than only at the end, which is why I said \"a\n> > conveyor belt\" and \"similar to WAL\". Details deliberately vague since\n> > I am just brainstorming here.\n\nThe dead TID fork also needs to be efficiently searchable. If the heap\nscan runs twice, the dead TIDs collected by the two heap passes could\noverlap. But we would not be able to merge them if we did index\nvacuuming on one of the indexes between those two heap scans. The\nsecond heap scan would need to record only TIDs that were not\ncollected by the first heap scan.\n\n>\n> > Clearly,\n> > we do not want to vacuum each partition by scanning the 1GB partition\n> > + the 50MB local index + the 50GB global index. That's insane.
With\n> the above system, since everything's decoupled, you can vacuum the\n> partition tables individually as often as required, and similarly for\n> their local indexes, but put off vacuuming the global index until\n> you've vacuumed a bunch, maybe all, of the partitions, so that the\n> work of cleaning up the global index cleans up dead TIDs from many or\n> all partitions instead of just one at a time.\n>\n> I can't think of any other way of sensibly implementing global indexes.\n\nGiven that we could load all dead TIDs from many or all partitions,\nhaving the dead TIDs in memory in an efficient format would also\nbecome important.\n\n> > It's probably tricky to get the\n> > autovacuum algorithm right here, but there seems to be some room for\n> > optimism.\n>\n> I think that it's basically okay if global indexes suck when you do\n> lots of UPDATEs -- this is a limitation that users can probably live\n> with. What's not okay is if they suck when you do relatively few\n> UPDATEs. It's especially not okay to have to scan the global index to\n> delete one index tuple that points to one LP_DEAD item.\n\nRight. Given that index vacuuming is decoupled, I think each index’s garbage\nstatistics become important, and they should preferably be fetchable without\naccessing the indexes. It would not be hard to estimate how many index\ntuples might be deleted by looking at the dead TID fork, but\nthat doesn’t necessarily match the actual number.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 22 Apr 2021 23:27:34 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Wed, Apr 21, 2021 at 5:38 PM Andres Freund <andres@anarazel.de> wrote:\n> I'm not sure that's the only way to deal with this.
While some form of\n> generic \"conveyor belt\" infrastructure would be a useful building block,\n> and it'd be sensible to use it here if it existed, it seems feasible to\n> store dead tids in a different way here. You could e.g. have per-heap-vacuum\n> files with a header containing LSNs that indicate the age of the\n> contents.\n\nThat's true, but I have some reservations about being overly reliant on\nthe filesystem to provide structure here. There are good reasons to be\nworried about bloating the number of files in the data directory. Hmm,\nbut maybe we could mitigate that. First, we could skip this for small\nrelations. If you can vacuum the table and all of its indexes using\nthe naive algorithm in <10 seconds, you probably shouldn't do anything\nfancy. That would *greatly* reduce the number of additional files\ngenerated. Second, we could forget about treating them as separate\nrelation forks and make them some other kind of thing entirely, in a\nseparate directory, especially if we adopted Sawada-san's proposal to\nskip WAL logging. I don't know if that proposal is actually a good\nidea, because it effectively adds a performance penalty when you crash\nor fail over, and that sort of thing can be an unpleasant surprise.\nBut it's something to think about.\n\n> > This scheme adds a lot of complexity, which is a concern, but it seems\n> > It's not completely independent: if you need to set some dead TIDs in\n> > the table to unused, you may have to force index vacuuming that isn't\n> > needed for bloat control. However, you only need to force it for\n> > indexes that haven't been vacuumed recently enough for some other\n> > reason, rather than every index.\n>\n> Hm - how would we know how recently that TID has been marked dead? We\n> don't even have xids for dead ItemIds... Maybe you're intending to
Maybe you're intending to\n> answer that in your next paragraph, but it's not obvious to me that'd be\n> sufficient...\n\nYou wouldn't know anything about when things were added in terms of\nwall clock time, but the idea was that TIDs get added in order and\nstay in that order. So you know which ones were added first. Imagine a\nconceptually infinite array of TIDs:\n\n(17,5) (332,6) (5, 1) (2153,92) ....\n\nEach index keeps a pointer into this array. Initially it points to the\nstart of the array, here (17,5). If an index vacuum starts after\n(17,5) and (332,6) have been added to the array but before (5,1) is\nadded, then upon completion it updates its pointer to point to (5,1).\nIf every index is pointing to (5,1) or some later element, then you\nknow that (17,5) and (332,6) can be set LP_UNUSED. If not, and you\nwant to get to a state where you CAN set (17,5) and (332,6) to\nLP_UNUSED, you just need to force index vac on indexes that are\npointing to something prior to (5,1) -- and keep forcing it until\nthose pointers reach (5,1) or later.\n\n> One thing that you didn't mention so far is that this'd allow us to add\n> dead TIDs to the \"dead tid\" file outside of vacuum too. In some\n> workloads most of the dead tuple removal happens as part of on-access\n> HOT pruning. While some indexes are likely to see that via the\n> killtuples logic, others may not. Being able to have more aggressive\n> index vacuum for the one or two bloated index, without needing to rescan\n> the heap, seems like it'd be a significant improvement.\n\nOh, that's a very interesting idea. It does impose some additional\nrequirements on any such system, though, because it means you have to\nbe able to efficiently add single TIDs. For example, you mention a\nper-heap-VACUUM file above, but you can't get away with creating a new\nfile per HOT prune no matter how you arrange things at the FS level.\nActually, though, I think the big problem here is deduplication. 
A\nfull-blown VACUUM can perhaps read all the already-known-to-be-dead\nTIDs into some kind of data structure and avoid re-adding them, but\nthat's impractical for a HOT prune.\n\n> Have you thought about how we would do the scheduling of vacuums for the\n> different indexes? We don't really have useful stats for the number of\n> dead index entries to be expected in an index. It'd not be hard to track\n> how many entries are removed in an index via killtuples, but\n> e.g. estimating how many dead entries there are in a partial index seems\n> quite hard (at least without introducing significant overhead).\n\nNo, I don't have any good ideas about that, really. Partial indexes\nseem like a hard problem, and so do GIN indexes or other kinds of\nthings where you may have multiple index entries per heap tuple. We\nmight have to accept some known-to-be-wrong approximations in such\ncases.\n\n> > One rather serious objection to this whole line of attack is that we'd\n> > ideally like VACUUM to reclaim disk space without using any more, in\n> > case that is the motivation for running VACUUM in the first place.\n>\n> I suspect we'd need a global limit of space used for this data. If above\n> that limit we'd switch to immediately performing the work required to\n> remove some of that space.\n\nI think that's entirely the wrong approach. On the one hand, it\ndoesn't prevent you from running out of disk space during emergency\nmaintenance, because the disk overall can be full even though you're\nbelow your quota of space for this particular purpose. On the other\nhand, it does subject you to random breakage when your database gets\nbig enough that the critical information can't be stored within the\nconfigured quota. I think we'd end up with pathological cases very\nmuch like what used to happen with the fixed-size free space map. What\nhappened there was that your database got big enough that you couldn't\ntrack all the free space any more and it just started bloating out the\nwazoo.
What would happen here is that you'd silently lose the\nwell-optimized version of VACUUM when your database gets too big. That\ndoes not seem like something anybody wants.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 22 Apr 2021 12:15:27 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Wed, Apr 21, 2021 at 7:51 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I'm very happy to see that you've taken an interest in this work! I\n> believe it's an important area. It's too important to be left to only\n> two contributors. I welcome your participation as an equal partner in\n> the broader project to fix problems with VACUUM.\n\nErr, thanks. I agree this needs broad discussion and participation.\n\n> My most ambitious goal is finding a way to remove the need to freeze\n> or to set hint bits. I think that we can do this by inventing a new\n> kind of VACUUM just for aborted transactions, which doesn't do index\n> vacuuming. You'd need something like an ARIES-style dirty page table\n> to make this cheap -- so it's a little like UNDO, but not very much.\n\nI don't see how that works. An aborted transaction can have made index\nentries, and those index entries can have already been moved by page\nsplits, and there can be arbitrarily many of them, so that you can't\nkeep track of them all in RAM. Also, you can crash after making the\nindex entries and writing them to the disk and before the abort\nhappens. Anyway, this is probably a topic for a separate thread.\n\n> I know I say this all the time these days, but it seems worth\n> repeating now: it is a qualitative difference, not a quantitative\n> difference.\n\nFor the record, I find your quantitative vs. 
qualitative distinction\nto be mostly unhelpful in understanding what's actually going on here.\nI've backed into it by reading the explanatory statements you've made\nat various times (including here, in the part I didn't quote). But\nthat phrase in and of itself means very little to me. Other people's\nmileage may vary, of course; I'm just telling you how I feel about it.\n\n> Right. And, the differences between index A and index B will tend to\n> be pretty consistent and often much larger than this.\n>\n> Many indexes would never have to be vacuumed, even with non-HOT\n> UPDATES due to bottom-up index deletion -- because they literally\n> won't even have one single page split for hours, while maybe one index\n> gets 3x larger in the same timeframe. Eventually you'll need to vacuum\n> the indexes all the same (not just the bloated index), but that's only\n> required to enable safely performing heap vacuuming. It's not so bad\n> if the non-bloated indexes won't be dirtied and if it's not so\n> frequent (dirtying pages is the real cost to keep under control here).\n\nInteresting.\n\n> The cost shouldn't be terribly noticeable because you have the\n> flexibility to change your mind at the first sign of an issue. So you\n> never pay an extreme cost (you pay a pretty low fixed cost\n> incrementally, at worst), but you do sometimes (and maybe even often)\n> get an extreme benefit -- the benefit of avoiding current pathological\n> performance issues. We know that the cost of bloat is very non-linear\n> in a bunch of ways that can be pretty well understood, so that seems\n> like the right thing to focus on -- this is perhaps the only thing\n> that we can expect to understand with a relatively high degree of\n> confidence. We can live with a lot of uncertainty about what's going\n> on with the workload by managing it continually, ramping up and down,\n> etc.\n\nI generally agree. 
You want to design a system in a way that's going\nto do a good job avoiding pathological cases. The current system is\nkind of naive about that. It does things that work well in\nmiddle-of-the-road cases, but often does stupid things in extreme\ncases. There are numerous examples of that; one is the \"useless\nvacuuming\" problem about which I've blogged in\nhttp://rhaas.blogspot.com/2020/02/useless-vacuuming.html where the\nsystem keeps on vacuuming because relfrozenxid is old but doesn't\nactually succeed in advancing it, so that it's just spinning to no\npurpose. Another thing is when it picks the \"wrong\" thing to do first,\nfocusing on a less urgent problem rather than a more urgent one. This\ncan go either way: we might spend a lot of energy cleaning up bloat\nwhen a wraparound shutdown is imminent, but we also might spend a lot\nof energy dealing with a wraparound issue that's not yet urgent while\nsome table bloats out of control. I think it's important not to let\nthe present discussion get overbroad; we won't be able to solve\neverything at once, and trying to do too many things at the same time\nwill likely result in instability.\n\n> > Clearly,\n> > we do not want to vacuum each partition by scanning the 1GB partition\n> > + the 50MB local index + the 50GB global index. That's insane. 
With\n> > the above system, since everything's decoupled, you can vacuum the\n> > partition tables individually as often as required, and similarly for\n> > their local indexes, but put off vacuuming the global index until\n> > you've vacuumed a bunch, maybe all, of the partitions, so that the\n> > work of cleaning up the global index cleans up dead TIDs from many or\n> > all partitions instead of just one at a time.\n>\n> I can't think of any other way of sensibly implementing global indexes.\n\nAwesome.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 22 Apr 2021 14:16:46 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Thu, Apr 22, 2021 at 9:15 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > Have you thought about how we would do the scheduling of vacuums for the\n> > different indexes? We don't really have useful stats for the number of\n> > dead index entries to be expected in an index. It'd not be hard to track\n> > how many entries are removed in an index via killtuples, but\n> > e.g. estimating how many dead entries there are in a partial index seems\n> > quite hard (at least without introducing significant overhead).\n>\n> No, I don't have any good ideas about that, really. Partial indexes\n> seem like a hard problem, and so do GIN indexes or other kinds of\n> things where you may have multiple index entries per heap tuple. We\n> might have to accept some known-to-be-wrong approximations in such\n> cases.\n\nI think that you're both missing very important subtleties here.\nApparently the \"quantitative vs qualitative\" distinction I like to\nmake hasn't cleared it up.\n\nYou both seem to be assuming that everything would be fine if you\ncould somehow inexpensively know the total number of undeleted dead\ntuples in each index at all times. But I don't think that that's true\nat all. 
I don't mean that it might not be true. What I mean is that\nit's usually a meaningless number *on its own*, at least if you assume\nthat every index is either an nbtree index (or an index that uses some\nother index AM that has the same index deletion capabilities).\n\nMy mental models for index bloat usually involve imagining an\nidealized version of a real world bloated index -- I compare the\nempirical reality against an imagined idealized version. I then try to\nfind optimizations that make the reality approximate the idealized\nversion. Say a version of the same index in a traditional 2PL database\nwithout MVCC, or in real world Postgres with VACUUM that magically\nruns infinitely fast.\n\nBottom-up index deletion usually leaves a huge number of\nundeleted-though-dead index tuples untouched for hours, even when it\nworks perfectly. 10% - 30% of the index tuples might be\nundeleted-though-dead at any given point in time (traditional B-Tree\nspace utilization math generally ensures that there is about that much\nfree space on each leaf page if we imagine no version churn/bloat --\nwe *naturally* have a lot of free space to work with). These are\n\"Schrodinger's dead index tuples\". You could count them\nmechanistically, but then you'd be counting index tuples that are\n\"already dead and deleted\" in an important theoretical sense, despite\nthe fact that they are not yet literally deleted. That's why bottom-up\nindex deletion frequently avoids 100% of all unnecessary page splits.\nThe asymmetry that was there all along was just crazy. I merely had\nthe realization that it was there and could be exploited -- I didn't\ncreate or invent the natural asymmetry.\n\nA similar issue exists within vacuumlazy.c (though that might matter a\nlot less). Notice that it counts a recently dead heap tuple in its\nnew_dead_tuples accounting, even though the workload can probably\ndelete that tuple in a just-in-time fashion opportunistically. 
Might\nwe be better off recognizing that such a heap tuple is already morally\ndead and gone, even if that isn't literally true? (That's a harder\nargument to make, and I'm not making it right now -- it's just an\nexample.)\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 22 Apr 2021 11:30:21 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Thu, Apr 22, 2021 at 7:51 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> How do we decide this target, I mean at a given point how do we decide\n> that what is the limit of dead TID's at which we want to trigger the\n> index vacuuming?\n\nIt's tricky. Essentially, it's a cost-benefit analysis. On the \"cost\"\nside, the expense associated with an index vacuum is basically the\nnumber of pages that we're going to visit, and the number of those\nthat we're going to dirty. We can know the former with certainty but\ncan only estimate the latter. On the \"benefit\" side, setting dead TIDs\nunused helps us in two ways. First, it lets us mark heap pages\nall-visible, which makes index-only scans work better and reduces the\ncost of future vacuuming. These benefits are mitigated by future DML\nunsetting those bits again; there's no point in marking a page\nall-visible if it's about to be modified again. Second, it avoids line\npointer bloat. Dead line pointers still take up space on the page, and\npotentially force the line pointer array to be extended, which can\neventually cause tuples that would have fit into the page to spill\ninto a different page, possibly a newly-allocated one that forces a\ntable extension.\n\nIt's hard to compare the costs to the benefits because we don't know\nthe frequency of access. Suppose it costs $1000 to vacuum relevant\nindexes and set dead line pointers unused. And suppose that if you do\nit, you thereafter will save $1 every time someone does an index-only\nscan. 
If there will be >1000 index-only scans in a meaningful time\nframe, it's a good trade-off, but if not, it's a bad one, but it's\ndifficult to predict the future, and we have limited information even\nabout the past.\n\nMy intuition is that two things that we want to consider are the total\nnumber of dead line pointers in the heap, and the number of pages\nacross which they are spread. It is also my intuition that the latter\nis the more important number, possibly to the extent that we could\nignore the former number completely. But exactly what the thresholds\nshould be is very unclear to me.\n\n> Is it a good idea to always perform an I/O after collecting the dead\n> TID's or there should be an option where the user can configure so\n> that it aggressively vacuum all the indexes and this I/O overhead can\n> be avoided completely.\n\nIt's my view that there should definitely be such an option.\n\nAs I also mentioned on another thread recently, suppose we pick words\nfor each phase of vacuum. For the sake of argument, suppose we refer\nto the first heap phase as PRUNE, the index phase as SANITIZE, and the\nsecond heap phase as RECYCLE. Then you can imagine syntax like this:\n\nVACUUM (PRUNE) my_table;\nVACUUM (SANITIZE) my_table; -- all indexes\nVACUUM my_index; -- must be sanitize only\nVACUUM (PRUNE, SANITIZE, RECYCLE) my_table; -- do everything\n\nNow in the last case it is clearly possible for the system to do\neverything in memory since all phases are being performed, but\ndepending on what we decide, maybe it will choose to use the dead-TID\nfork in some cases for some reason or other. If so, we might have\nexplicit syntax to override that behavior, e.g.\n\nVACUUM (PRUNE, SANITIZE, RECYCLE, TID_STORE 0) my_table;\n\nwhich might be able to be abbreviated, depending on how we set the\ndefaults, to just:\n\nVACUUM (TID_STORE 0) my_table;\n\nThis is all just hypothetical syntax and probably needs a good deal of\npolish and bike-shedding. 
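To make the cost-benefit framing from earlier in this message slightly more concrete, here is a toy sketch (in Python, purely for illustration -- every name and threshold below is invented, and none of this is a proposal for actual code) of the kind of trigger heuristic that might decide when the index phase is worth running:

```python
# Toy model of a heuristic for triggering the index ("SANITIZE") phase.
# All names and thresholds are invented for illustration only.

def should_sanitize_index(index_pages: int,
                          dead_tids: int,
                          heap_pages_with_dead_tids: int,
                          heap_pages: int) -> bool:
    """Decide whether vacuuming this index is worth the cost right now."""
    if dead_tids == 0:
        return False

    # Cost side: we will read every index page, and dirty some unknown
    # fraction of them; assume half of them, pessimistically.
    estimated_cost = index_pages * 1.5

    # Benefit side: driven more by how *spread out* the dead TIDs are
    # than by their raw count, since each affected heap page can
    # potentially be marked all-visible afterwards.
    estimated_benefit = heap_pages_with_dead_tids * 2.0

    # Separately, refuse to let dead line pointers accumulate without
    # bound, regardless of the cost-benefit ratio.
    spread_fraction = heap_pages_with_dead_tids / max(heap_pages, 1)
    if spread_fraction > 0.2:
        return True

    return estimated_benefit > estimated_cost
```

Obviously the real decision would need far better inputs than this (per-index statistics in particular), but it shows the shape of the trade-off: a fixed scan cost weighed against a benefit that scales with how widely the dead TIDs are spread.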
But it would be really nice to standardize\non some set of terms like prune/sanitize/recycle or whatever we pick,\nbecause then we could document what they mean, use them in autovacuum\nlog messages, use them internally for function names, use them for\nVACUUM option names when we get to that point, etc. and the whole\nthing would be a good deal more comprehensible than at present.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 22 Apr 2021 14:37:46 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "Hi,\n\nOn 2021-04-22 11:30:21 -0700, Peter Geoghegan wrote:\n> I think that you're both missing very important subtleties here.\n> Apparently the \"quantitative vs qualitative\" distinction I like to\n> make hasn't cleared it up.\n\nI'm honestly getting a bit annoyed about this stuff. Yes it's a cool\nimprovement, but no, it doesn't mean that there aren't still relevant\nissues in important cases. It doesn't help that you repeatedly imply\nthat people that don't see it your way need to have their view \"cleared\nup\".\n\n\"Bottom up index deletion\" is practically *irrelevant* for a significant\nset of workloads.\n\n\n> You both seem to be assuming that everything would be fine if you\n> could somehow inexpensively know the total number of undeleted dead\n> tuples in each index at all times.\n\nI don't think we'd need an exact number. Just a reasonable approximation\nso we know whether it's worth spending time vacuuming some index.\n\n\n> But I don't think that that's true at all. I don't mean that it might\n> not be true. 
What I mean is that it's usually a meaningless number *on\n> its own*, at least if you assume that every index is either an nbtree\n> index (or an index that uses some other index AM that has the same\n> index deletion capabilities).\n\nYou also have to assume that you have roughly evenly distributed index\ninsertions and deletions. But workloads that insert into some parts of a\nvalue range and delete from another range are common.\n\nI even would say that *precisely* because \"Bottom up index deletion\" can\nbe very efficient in some workloads it is useful to have per-index stats\ndetermining whether an index should be vacuumed or not.\n\n\n> My mental models for index bloat usually involve imagining an\n> idealized version of a real world bloated index -- I compare the\n> empirical reality against an imagined idealized version. I then try to\n> find optimizations that make the reality approximate the idealized\n> version. Say a version of the same index in a traditional 2PL database\n> without MVCC, or in real world Postgres with VACUUM that magically\n> runs infinitely fast.\n> \n> Bottom-up index deletion usually leaves a huge number of\n> undeleted-though-dead index tuples untouched for hours, even when it\n> works perfectly. 10% - 30% of the index tuples might be\n> undeleted-though-dead at any given point in time (traditional B-Tree\n> space utilization math generally ensures that there is about that much\n> free space on each leaf page if we imagine no version churn/bloat --\n> we *naturally* have a lot of free space to work with). These are\n> \"Schrodinger's dead index tuples\". You could count them\n> mechanistically, but then you'd be counting index tuples that are\n> \"already dead and deleted\" in an important theoretical sense, despite\n> the fact that they are not yet literally deleted. That's why bottom-up\n> index deletion frequently avoids 100% of all unnecessary page splits.\n> The asymmetry that was there all along was just crazy. 
I merely had\n> the realization that it was there and could be exploited -- I didn't\n> create or invent the natural asymmetry.\n\nExcept that heap bloat not index bloat might be the more pressing\nconcern. Or that there will be no meaningful amount of bottom-up\ndeletions. Or ...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 22 Apr 2021 11:44:00 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Thu, Apr 22, 2021 at 10:28 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> The dead TID fork needs to also be efficiently searched. If the heap\n> scan runs twice, the collected dead TIDs on each heap pass could be\n> overlapped. But we would not be able to merge them if we did index\n> vacuuming on one of indexes at between those two heap scans. The\n> second time heap scan would need to record only TIDs that are not\n> collected by the first time heap scan.\n\nI agree that there's a problem here. It seems to me that it's probably\npossible to have a dead TID fork that implements \"throw away the\noldest stuff\" efficiently, and it's probably also possible to have a\nTID fork that can be searched efficiently. However, I am not sure that\nit's possible to have a dead TID fork that does both of those things\nefficiently. Maybe you have an idea. My intuition is that if we have\nto pick one, it's MUCH more important to be able to throw away the\noldest stuff efficiently. I think we can work around the lack of\nefficient lookup, but I don't see a way to work around the lack of an\nefficient operation to discard the oldest stuff.\n\n> Right. Given decoupling index vacuuming, I think the index’s garbage\n> statistics are important which preferably need to be fetchable without\n> accessing indexes. 
It would be not hard to estimate how many index\n> tuples might be able to be deleted by looking at the dead TID fork but\n> it doesn’t necessarily match the actual number.\n\nRight, and to appeal (I think) to Peter's quantitative vs. qualitative\nprinciple, it could be way off. Like, we could have a billion dead\nTIDs and in one index the number of index entries that need to be\ncleaned out could be 1 billion and in another index it could be zero\n(0). We know how much data we will need to scan because we can fstat()\nthe index, but there seems to be no easy way to estimate how many of\nthose pages we'll need to dirty, because we don't know how successful\nprevious opportunistic cleanup has been. It is not impossible, as\nPeter has pointed out a few times now, that it has worked perfectly\nand there will be no modifications required, but it is also possible\nthat it has done nothing.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 22 Apr 2021 14:47:14 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "Hi,\n\nOn 2021-04-22 14:47:14 -0400, Robert Haas wrote:\n> On Thu, Apr 22, 2021 at 10:28 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > Right. Given decoupling index vacuuming, I think the index’s garbage\n> > statistics are important which preferably need to be fetchable without\n> > accessing indexes. It would be not hard to estimate how many index\n> > tuples might be able to be deleted by looking at the dead TID fork but\n> > it doesn’t necessarily match the actual number.\n> \n> Right, and to appeal (I think) to Peter's quantitative vs. qualitative\n> principle, it could be way off. Like, we could have a billion dead\n> TIDs and in one index the number of index entries that need to be\n> cleaned out could be 1 billion and in another index it could be zero\n> (0). 
We know how much data we will need to scan because we can fstat()\n> the index, but there seems to be no easy way to estimate how many of\n> those pages we'll need to dirty, because we don't know how successful\n> previous opportunistic cleanup has been.\n\nThat aspect seems reasonably easy to fix: We can start to report the\nnumber of opportunistically deleted index entries to pgstat. At least\nnbtree already performs the actual deletion in bulk and we already have\n(currently unused) space in the pgstat entries for it, so I don't think\nit'd meaningfully increase overhead. And it'd improve insight into how\nindexes operate significantly, even leaving autovacuum etc aside.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 22 Apr 2021 11:56:10 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Thu, Apr 22, 2021 at 11:44 AM Andres Freund <andres@anarazel.de> wrote:\n> I'm honestly getting a bit annoyed about this stuff.\n\nYou're easily annoyed.\n\n> Yes it's a cool\n> improvement, but no, it doesn't mean that there aren't still relevant\n> issues in important cases. It doesn't help that you repeatedly imply\n> that people that don't see it your way need to have their view \"cleared\n> up\".\n\nI don't think that anything that I've said about it contradicts\nanything that you or Robert said. What I said is that you're missing a\ncouple of important subtleties (or that you seem to be). It's not\nreally about the optimization in particular -- it's about the\nsubtleties that it exploits. I think that they're generalizable. Even\nif there was only a 1% chance of that being true, it would still be\nworth exploring in depth.\n\nI think that everybody's beliefs about VACUUM tend to be correct. It\nalmost doesn't matter if scenario A is the problem in 90% of cases\nversus 10% of cases for scenario B (or vice-versa). 
What actually\nmatters is that we have good handling for both. (It's probably some\nweird combination of scenario A and scenario B in any case.)\n\n> \"Bottom up index deletion\" is practically *irrelevant* for a significant\n> set of workloads.\n\nYou're missing the broader point. Which is that we don't know how much\nit helps in each case, just as we don't know how much some other\ncomplementary optimization helps. It's important to develop\ncomplementary techniques precisely because (say) bottom-up index\ndeletion only solves one class of problem. And because it's so hard to\npredict.\n\nI actually went on at length about the cases that the optimization\n*doesn't* help. Because that'll be a disproportionate source of\nproblems now. And you really need to avoid all of the big sources of\ntrouble to get a really good outcome. Avoiding each and every source\nof trouble might be much much more useful than avoiding all but one.\n\n> > You both seem to be assuming that everything would be fine if you\n> > could somehow inexpensively know the total number of undeleted dead\n> > tuples in each index at all times.\n>\n> I don't think we'd need an exact number. Just a reasonable approximation\n> so we know whether it's worth spending time vacuuming some index.\n\nI agree.\n\n> You also have to assume that you have roughly evenly distributed index\n> insertions and deletions. But workloads that insert into some parts of a\n> value range and delete from another range are common.\n>\n> I even would say that *precisely* because \"Bottom up index deletion\" can\n> be very efficient in some workloads it is useful to have per-index stats\n> determining whether an index should be vacuumed or not.\n\nExactly!\n\n> Except that heap bloat not index bloat might be the more pressing\n> concern. Or that there will be no meaningful amount of bottom-up\n> deletions. 
Or ...\n\nExactly!\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 22 Apr 2021 12:10:54 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Thu, Apr 22, 2021 at 3:11 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I think that everybody's beliefs about VACUUM tend to be correct. It\n> almost doesn't matter if scenario A is the problem in 90% of cases\n> versus 10% of cases for scenario B (or vice-versa). What actually\n> matters is that we have good handling for both. (It's probably some\n> weird combination of scenario A and scenario B in any case.)\n\nI agree strongly with this. In fact, I seem to remember saying similar\nthings to you in the past. If something wins $1 in 90% of cases and\nloses $5 in 10% of cases, is it a good idea? Well, it depends on how\nthe losses are distributed. If every user can be expected to hit both\nwinning and losing cases with approximately those frequencies, then\nyes, it's a good idea, because everyone will come out ahead on\naverage. But if 90% of users will see only wins and 10% of users will\nsee only losses, it sucks.\n\nThat being said, I don't know what this really has to do with the\nproposal on the table, except in the most general sense. If you're\njust saying that decoupling stuff is good because different indexes\nhave different needs, I am in agreement, as I said in my OP. It sort\nof sounded like you were saying that it's not important to try to\nestimate the number of undeleted dead tuples in each index, which\npuzzled me, because while knowing doesn't mean everything is\nwonderful, not knowing it sure seems worse. But I guess maybe that's\nnot what you were saying, so I don't know. 
I feel like we're in danger\nof drifting into discussions about whether we're disagreeing with each\nother rather than, as I would like, focusing on how to design a system\nfor $SUBJECT.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 22 Apr 2021 15:27:22 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "Hi,\n\nOn 2021-04-22 12:15:27 -0400, Robert Haas wrote:\n> On Wed, Apr 21, 2021 at 5:38 PM Andres Freund <andres@anarazel.de> wrote:\n> > I'm not sure that's the only way to deal with this. While some form of\n> > generic \"conveyor belt\" infrastructure would be a useful building block,\n> > and it'd be sensible to use it here if it existed, it seems feasible to\n> > store dead tids in a different way here. You could e.g. have per-heap-vacuum\n> > files with a header containing LSNs that indicate the age of the\n> > contents.\n>\n> That's true, but I have some reservations about being overly reliant on\n> the filesystem to provide structure here. There are good reasons to be\n> worried about bloating the number of files in the data directory. Hmm,\n> but maybe we could mitigate that. First, we could skip this for small\n> relations. If you can vacuum the table and all of its indexes using\n> the naive algorithm in <10 seconds, you probably shouldn't do anything\n> fancy. That would *greatly* reduce the number of additional files\n> generated. Second, we could forget about treating them as separate\n> relation forks and make them some other kind of thing entirely, in a\n> separate directory\n\nI'm not *too* worried about this issue. IMO the big difference to the\ncost of additional relation forks is that such files would only exist\nwhen the table is modified to a somewhat meaningful degree. 
IME the\npractical issues with the number of files due to forks are cases where\nhuge number of tables that are practically never modified exist.\n\nThat's not to say that I am sure that some form of \"conveyor belt\"\nstorage *wouldn't* be the right thing. How were you thinking of dealing\nwith the per-relation aspects of this? One conveyor belt per relation?\n\n\n> especially if we adopted Sawada-san's proposal to skip WAL logging. I\n> don't know if that proposal is actually a good idea, because it\n> effectively adds a performance penalty when you crash or fail over,\n> and that sort of thing can be an unpleasant surprise. But it's\n> something to think about.\n\nI'm doubtful about skipping WAL logging entirely - I'd have to think\nharder about it, but I think that'd mean we'd restart from scratch after\ncrashes / immediate restarts as well, because we couldn't rely on the\ncontents of the \"dead tid\" files to be accurate. In addition to the\nreplication issues you mention.\n\n\n> > One thing that you didn't mention so far is that this'd allow us to add\n> > dead TIDs to the \"dead tid\" file outside of vacuum too. In some\n> > workloads most of the dead tuple removal happens as part of on-access\n> > HOT pruning. While some indexes are likely to see that via the\n> > killtuples logic, others may not. Being able to have more aggressive\n> > index vacuum for the one or two bloated index, without needing to rescan\n> > the heap, seems like it'd be a significant improvement.\n>\n> Oh, that's a very interesting idea. It does impose some additional\n> requirements on any such system, though, because it means you have to\n> be able to efficiently add single TIDs. For example, you mention a\n> per-heap-VACUUM file above, but you can't get away with creating a new\n> file per HOT prune no matter how you arrange things at the FS level.\n\nI agree that it'd be an issue, even though I think it's not too common\nthat only one tuple gets pruned. 
It might be possible to have a\nper-relation file per backend or such... But yes, we'd definitely have\nto think about it.\n\nI've previously pondered adding some cross-page batching and deferring\nof hot pruning in the read case, which I guess might be more\nadvantageous with this.\n\nThe main reason for thinking about batching & deferring of HOT pruning\nis that I found during the AIO work that there's speed gains to be had\nif we pad xlog pages instead of partially filling them - obviously\nrisking increasing WAL usage. One idea to reduce the cost of that was to\nfill the padded space with actually useful things, like FPIs or hot\npruning records. A related speedup opportunity with AIO is to perform\nuseful work while waiting for WAL flushes during commit (i.e. after\ninitiating IO to flush the commit record, but before that IO has\ncompleted).\n\n\n> Actually, though, I think the big problem here is deduplication. A\n> full-blown VACUUM can perhaps read all the already-known-to-be-dead\n> TIDs into some kind of data structure and avoid re-adding them, but\n> that's impractical for a HOT prune.\n\nWhat is there to deduplicate during HOT pruning? It seems that hot\npruning would need to log all items that it marks dead, but nothing\nelse? And that VACUUM can't yet have put those items onto the dead tuple\nmap, because they weren't dead yet?\n\n\nThis actually brings up a question I vaguely had to the fore: How are\nyou assuming indexes would access the list of dead tids? As far as I can\nsee the on-disk data would not be fully sorted even without adding\nthings during HOT pruning - the dead tids from a single heap pass will\nbe, but there'll be tids from multiple passes, right?\n\nAre you assuming that we'd read the data into memory and then merge-sort\nbetween each of the pre-sorted \"runs\"? 
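For what it's worth, merging between pre-sorted runs would not require materializing everything in memory -- it can be streamed. A toy illustration in Python, with TIDs modeled as (block, offset) pairs (this only sketches the merge idea; nothing here resembles real code):

```python
import heapq

# Toy illustration of streaming a merge over multiple pre-sorted "runs"
# of dead TIDs, as produced by separate heap passes.  TIDs are modeled
# as (block, offset) pairs; real code would operate on on-disk pages.

def merged_dead_tids(runs):
    """Yield dead TIDs from all runs in sorted order, without duplicates.

    Each run must already be sorted.  heapq.merge only keeps one item
    per run in memory at a time, so runs could be streamed from disk.
    """
    last = None
    for tid in heapq.merge(*runs):
        if tid != last:          # collapse duplicates across runs
            yield tid
            last = tid

run1 = [(1, 2), (1, 5), (7, 1)]
run2 = [(1, 5), (3, 4), (9, 2)]
print(list(merged_dead_tids([run1, run2])))
# -> [(1, 2), (1, 5), (3, 4), (7, 1), (9, 2)]
```

A nice side effect is that duplicates across runs collapse during the merge, which might partly answer the deduplication question for runs produced by separate heap passes.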
Or that we'd read and cache parts\nof the on-disk data during index checks?\n\n\n> > Have you thought about how we would do the scheduling of vacuums for the\n> > different indexes? We don't really have useful stats for the number of\n> > dead index entries to be expected in an index. It'd not be hard to track\n> > how many entries are removed in an index via killtuples, but\n> > e.g. estimating how many dead entries there are in a partial index seems\n> > quite hard (at least without introducing significant overhead).\n>\n> No, I don't have any good ideas about that, really. Partial indexes\n> seem like a hard problem, and so do GIN indexes or other kinds of\n> things where you may have multiple index entries per heap tuple. We\n> might have to accept some known-to-be-wrong approximations in such\n> cases.\n\nThe gin case seems a bit easier than the partial index case. Keeping\nstats about the number of new entries in a GIN index doesn't seem too\nhard, nor does tracking the number of cleaned up index entries. But\nknowing which indexes are affected when a heap tuple becomes dead seems\nharder. I guess we could just start doing a stats-only version of\nExecInsertIndexTuples() for deletes, but obviously the cost of that is\nnot enticing. Perhaps it'd not be too bad if we only did it when there's\nan index with predicates?\n\n\n> > > One rather serious objection to this whole line of attack is that we'd\n> > > ideally like VACUUM to reclaim disk space without using any more, in\n> > > case that's the motivation for running VACUUM in the first place.\n> >\n> > I suspect we'd need a global limit of space used for this data. If above\n> > that limit we'd switch to immediately performing the work required to\n> > remove some of that space.\n>\n> I think that's entirely the wrong approach. 
On the one hand, it\n> doesn't prevent you from running out of disk space during emergency\n> maintenance, because the disk overall can be full even though you're\n> below your quota of space for this particular purpose. On the other\n> hand, it does subject you to random breakage when your database gets\n> big enough that the critical information can't be stored within the\n> configured quota.\n\nWhat random breakage are you thinking of? I'm not thinking of a hard\nlimit that may not be crossed at any cost, by even a single byte, but\nthat [auto]VACUUMs would start to be more aggressive about performing\nindex vacuums once the limit is reached.\n\n\n> I think we'd end up with pathological cases very much like what used\n> to happen with the fixed-size free space map. What happened there was\n> that your database got big enough that you couldn't track all the free\n> space any more and it just started bloating out the wazoo. What would\n> happen here is that you'd silently lose the well-optimized version of\n> VACUUM when your database gets too big. That does not seem like\n> something anybody wants.\n\nI don't think the consequences would really be that comparable. Once the\nFSM size was reached in the bad old days, we'd just loose track of of\nfree space. Whereas here we'd start to be more aggressive about cleaning\nup once the \"dead tids\" data reaches a certain size. Of course that\nwould have efficiency impacts, but I think \"global free space wasted\" is\na valid input in deciding when to perform index vacuums.\n\nI think max_wal_size has worked out pretty well, even if not perfect.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 22 Apr 2021 13:01:18 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Thu, Apr 22, 2021 at 12:27 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I agree strongly with this. 
In fact, I seem to remember saying similar\n> things to you in the past. If something wins $1 in 90% of cases and\n> loses $5 in 10% of cases, is it a good idea? Well, it depends on how\n> the losses are distributed. If every user can be expected to hit both\n> winning and losing cases with approximately those frequencies, then\n> yes, it's a good idea, because everyone will come out ahead on\n> average. But if 90% of users will see only wins and 10% of users will\n> see only losses, it sucks.\n\nRight. It's essential that we not disadvantage any workload by more\nthan a small fixed amount (and only with a huge upside elsewhere).\n\nThe even more general version is this: the average probably doesn't\neven exist in any meaningful sense.\n\nBottom-up index deletion tends to be effective either 100% of the time\nor 0% of the time, which varies on an index by index basis. Does that\nmean we should split the difference, and assume that it's effective\n50% of the time? Clearly not. Clearly that particular framing is just\nwrong. And clearly it basically doesn't matter if it's half of all\nindexes, or a quarter, or none, whatever. Because it's all of those\nproportions, and also because who cares.\n\n> That being said, I don't know what this really has to do with the\n> proposal on the table, except in the most general sense. If you're\n> just saying that decoupling stuff is good because different indexes\n> have different needs, I am in agreement, as I said in my OP.\n\nMostly what I'm saying is that I would like to put together a rough\nlist of things that we could do to improve VACUUM along the lines\nwe've discussed -- all of which stem from $SUBJECT. There are\nliterally dozens of goals (some of which are quite disparate) that we\ncould conceivably set out to pursue under the banner of $SUBJECT.\nIdeally there would be soft agreement about which ideas were more\npromising. 
Ideally we'd avoid painting ourselves into a corner with\nrespect to one of these goals, in pursuit of any other goal.\n\nI suspect that we'll need somewhat more of a top-down approach to this\nwork, which is something that we as a community don't have much\nexperience with. It might be useful to set the parameters of the\ndiscussion up-front, which seems weird to me too, but might actually\nhelp. (A lot of the current problems with VACUUM seem like they might\nbe consequences of pgsql-hackers not usually working like this.)\n\n> It sort\n> of sounded like you were saying that it's not important to try to\n> estimate the number of undeleted dead tuples in each index, which\n> puzzled me, because while knowing doesn't mean everything is\n> wonderful, not knowing it sure seems worse. But I guess maybe that's\n> not what you were saying, so I don't know.\n\nI agree that it matters that we are able to characterize how bloated a\npartial index is, because an improved VACUUM implementation will need\nto know that. My main point about that was that it's complicated in\nsurprising ways that actually matter. An approximate solution seems\nquite possible to me, but I think that that will probably have to\ninvolve the index AM directly.\n\nSometimes 10% - 30% of the extant physical index tuples will be dead\nand it'll be totally fine in every practical sense -- the index won't\nhave grown by even one page since the last VACUUM! Other times it\nmight be as few as 2% - 5% that are now dead when VACUUM is\nconsidered, which will in fact be a serious problem (e.g., it's\nconcentrated in one part of the keyspace, say). I would say that\nhaving some rough idea of which case we have on our hands is extremely\nimportant here. 
Even if the distinction only arises in rare cases\n(though FWIW I don't think that these differences will be rare at\nall).\n\n(I also tried to clarify what I mean about qualitative bloat in\npassing in my response about the case of a bloated partial index.)\n\n> I feel like we're in danger\n> of drifting into discussions about whether we're disagreeing with each\n> other rather than, as I would like, focusing on how to design a system\n> for $SUBJECT.\n\nWhile I am certainly guilty of being kind of hand-wavy and talking\nabout lots of stuff all at once here, it's still kind of unclear what\npractical benefits you hope to attain through $SUBJECT. Apart from the\nthing about global indexes, which matters but is hardly the\noverwhelming reason to do all this. I myself don't expect your goals\nto be super crisp just yet. As I said, I'm happy to talk about it in\nvery general terms at first -- isn't that what you were doing\nyourself?\n\nOr did I misunderstand -- are global indexes mostly all that you're\nthinking about here? (Even if they are all you care about, it still\nseems like you're still somewhat obligated to generalize the dead TID\nfork/map thing to help with a bunch of other things, just to justify\nthe complexity of adding a dead TID relfork.)\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Thu, 22 Apr 2021 13:51:51 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Thu, Apr 22, 2021 at 11:16 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > My most ambitious goal is finding a way to remove the need to freeze\n> > or to set hint bits. I think that we can do this by inventing a new\n> > kind of VACUUM just for aborted transactions, which doesn't do index\n> > vacuuming. You'd need something like an ARIES-style dirty page table\n> > to make this cheap -- so it's a little like UNDO, but not very much.\n>\n> I don't see how that works. 
An aborted transaction can have made index\n> entries, and those index entries can have already been moved by page\n> splits, and there can be arbitrarily many of them, so that you can't\n> keep track of them all in RAM. Also, you can crash after making the\n> index entries and writing them to the disk and before the abort\n> happens. Anyway, this is probably a topic for a separate thread.\n\nThis is a topic for a separate thread, but I will briefly address your question.\n\nUnder the scheme I've sketched, we never do index vacuuming when\ninvoking an autovacuum worker (or something like it) to clean-up after\nan aborted transaction. We track the pages that all transactions have\nmodified. If a transaction commits then we quickly discard the\nrelevant dirty page table metadata. If a transaction aborts\n(presumably a much rarer event), then we launch an autovacuum worker\nthat visits precisely those heap blocks that were modified by the\naborted transaction, and just prune each page, one by one. We have a\ncutoff that works a little like relfrozenxid, except that it tracks\nthe point in the XID space before which we know any XIDs (any XIDs\nthat we can read from extant tuple headers) must be committed.\n\nThe idea of a \"Dirty page table\" is standard ARIES. It'd be tricky to\nget it working, but still quite possible.\n\nThe overall goal of this design is for the system to be able to reason\nabout committed-ness inexpensively (to obviate the need for hint bits\nand per-tuple freezing). We want to be able to say for sure that\nalmost all heap blocks in the database only contain heap tuples whose\nheaders contain only committed XIDs, or LP_DEAD items that are simply\ndead (the exact provenance of these LP_DEAD items is not a concern,\njust like today). The XID cutoff for committed-ness could be kept\nquite recent due to the fact that aborted transactions are naturally\nrare. 
And because we can do relatively little work to \"logically roll\nback\" aborted transactions.\n\nNote that a heap tuple whose xmin and xmax are committed might also be\ndead under this scheme, since of course it might have been updated or\ndeleted by an xact that committed. We've effectively decoupled things\nby making aborted transactions special, and subject to very eager\ncleanup.\n\nI'm sure that there are significant challenges with making something\nlike this work. But to me this design seems roughly the right\ncombination of radical and conservative.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 22 Apr 2021 15:52:01 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Thu, Apr 22, 2021 at 3:52 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Thu, Apr 22, 2021 at 11:16 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > My most ambitious goal is finding a way to remove the need to freeze\n> > > or to set hint bits. I think that we can do this by inventing a new\n> > > kind of VACUUM just for aborted transactions, which doesn't do index\n> > > vacuuming. You'd need something like an ARIES-style dirty page table\n> > > to make this cheap -- so it's a little like UNDO, but not very much.\n> >\n> > I don't see how that works. An aborted transaction can have made index\n> > entries, and those index entries can have already been moved by page\n> > splits, and there can be arbitrarily many of them, so that you can't\n> > keep track of them all in RAM. Also, you can crash after making the\n> > index entries and writing them to the disk and before the abort\n> > happens. 
Anyway, this is probably a topic for a separate thread.\n>\n> This is a topic for a separate thread, but I will briefly address your question.\n>\n> Under the scheme I've sketched, we never do index vacuuming when\n> invoking an autovacuum worker (or something like it) to clean-up after\n> an aborted transaction. We track the pages that all transactions have\n> modified. If a transaction commits then we quickly discard the\n> relevant dirty page table metadata. If a transaction aborts\n> (presumably a much rarer event), then we launch an autovacuum worker\n> that visits precisely those heap blocks that were modified by the\n> aborted transaction, and just prune each page, one by one. We have a\n> cutoff that works a little like relfrozenxid, except that it tracks\n> the point in the XID space before which we know any XIDs (any XIDs\n> that we can read from extant tuple headers) must be committed.\n>\n> The idea of a \"Dirty page table\" is standard ARIES. It'd be tricky to\n> get it working, but still quite possible.\n>\n> The overall goal of this design is for the system to be able to reason\n> about committed-ness inexpensively (to obviate the need for hint bits\n> and per-tuple freezing). We want to be able to say for sure that\n> almost all heap blocks in the database only contain heap tuples whose\n> headers contain only committed XIDs, or LP_DEAD items that are simply\n> dead (the exact provenance of these LP_DEAD items is not a concern,\n> just like today). The XID cutoff for committed-ness could be kept\n> quite recent due to the fact that aborted transactions are naturally\n> rare. And because we can do relatively little work to \"logically roll\n> back\" aborted transactions.\n>\n> Note that a heap tuple whose xmin and xmax are committed might also be\n> dead under this scheme, since of course it might have been updated or\n> deleted by an xact that committed. 
We've effectively decoupled things\n> by making aborted transactions special, and subject to very eager\n> cleanup.\n>\n> I'm sure that there are significant challenges with making something\n> like this work. But to me this design seems roughly the right\n> combination of radical and conservative.\n\nI'll start a new thread now, as a placeholder for further discussion.\n\nThis would be an incredibly ambitious project, and I'm sure that this\nthread will be very hand-wavy at first. But you've got to start\nsomewhere.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 22 Apr 2021 17:39:54 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Getting rid of freezing and hint bits by eagerly vacuuming aborted\n xacts (was: decoupling table and index vacuum)" }, { "msg_contents": "On Fri, Apr 23, 2021 at 3:47 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Apr 22, 2021 at 10:28 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > The dead TID fork needs to also be efficiently searched. If the heap\n> > scan runs twice, the collected dead TIDs on each heap pass could be\n> > overlapped. But we would not be able to merge them if we did index\n> > vacuuming on one of indexes at between those two heap scans. The\n> > second time heap scan would need to record only TIDs that are not\n> > collected by the first time heap scan.\n>\n> I agree that there's a problem here. It seems to me that it's probably\n> possible to have a dead TID fork that implements \"throw away the\n> oldest stuff\" efficiently, and it's probably also possible to have a\n> TID fork that can be searched efficiently. However, I am not sure that\n> it's possible to have a dead TID fork that does both of those things\n> efficiently. Maybe you have an idea. My intuition is that if we have\n> to pick one, it's MUCH more important to be able to throw away the\n> oldest stuff efficiently. 
I think we can work around the lack of\n> efficient lookup, but I don't see a way to work around the lack of an\n> efficient operation to discard the oldest stuff.\n\nAgreed.\n\nI think we can divide the TID fork into 16MB or 32MB chunks like WAL\nsegment files so that we can easily remove old chunks. Regarding the\nefficient search part, I think we need to consider the case where the\nTID fork gets bigger than maintenance_work_mem. In that case, during\nthe heap scan, we need to check if the discovered TID exists in a\nchunk of the TID fork that could be on the disk. Even if all\nknown-dead-TIDs are loaded into an array on the memory, it could get\nmuch slower than the current heap scan to bsearch over the array for\neach dead TID discovered during heap scan. So it would be better to\nhave a way to skip searching by already recorded TIDs. For example,\nduring heap scan or HOT pruning, I think that when marking TIDs dead\nand recording it to the dead TID fork we can mark them “dead and\nrecorded” instead of just “dead” so that future heap scans can skip\nthose TIDs without existence check.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Fri, 23 Apr 2021 20:03:56 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Fri, Apr 23, 2021 at 7:04 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> I think we can divide the TID fork into 16MB or 32MB chunks like WAL\n> segment files so that we can easily remove old chunks. Regarding the\n> efficient search part, I think we need to consider the case where the\n> TID fork gets bigger than maintenance_work_mem. In that case, during\n> the heap scan, we need to check if the discovered TID exists in a\n> chunk of the TID fork that could be on the disk. 
Even if all\n> known-dead-TIDs are loaded into an array on the memory, it could get\n> much slower than the current heap scan to bsearch over the array for\n> each dead TID discovered during heap scan. So it would be better to\n> have a way to skip searching by already recorded TIDs. For example,\n> during heap scan or HOT pruning, I think that when marking TIDs dead\n> and recording it to the dead TID fork we can mark them “dead and\n> recorded” instead of just “dead” so that future heap scans can skip\n> those TIDs without existence check.\n\nI'm not very excited about this. If we did this, and if we ever\ngenerated dead-but-not-recorded TIDs, then we will potentially dirty\nthose blocks again later to mark them recorded.\n\nAlso, if bsearch() is a bottleneck, how about just using an O(1)\nalgorithm instead of an O(lg n) algorithm, rather than changing the\non-disk format?\n\nAlso, can you clarify exactly what you think the problem case is here?\nIt seems to me that:\n\n1. If we're pruning the heap to collect dead TIDs, we should stop when\nthe number of TIDs we've accumulated reaches maintenance_work_mem. It\nis possible that we could find when starting to prune that there are\n*already* more dead TIDs than will fit, because maintenance_work_mem\nmight have been reduced since they were gathered. But that's OK: we\ncan figure out that there are more than will fit without loading them\nall, and since we shouldn't do additional pruning in this case,\nthere's no issue.\n\n2. If we're sanitizing indexes, we should normally discover that there\nare few enough TIDs that we can still fit them all in memory. But if\nthat proves not to be the case, again because for example\nmaintenance_work_mem has been reduced, then we can handle that with\nmultiple index passes just as we do today.\n\n3. 
If we're going back to the heap to permit TIDs to be recycled by\nsetting dead line pointers to unused, we can load in as many of those\nas will fit in maintenance_work_mem, sort them by block number, and go\nthrough block by block and DTRT. Then, we can release all that memory\nand, if necessary, do the whole thing again. This isn't even\nparticularly inefficient.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 23 Apr 2021 11:21:52 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Thu, Apr 22, 2021 at 4:52 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Mostly what I'm saying is that I would like to put together a rough\n> list of things that we could do to improve VACUUM along the lines\n> we've discussed -- all of which stem from $SUBJECT. There are\n> literally dozens of goals (some of which are quite disparate) that we\n> could conceivably set out to pursue under the banner of $SUBJECT.\n\nI hope not. I don't have a clue why there would be dozens of possible\ngoals here, or why it matters. I think if we're going to do something\nlike $SUBJECT, we should just concentrate on the best way to make that\nparticular change happen with minimal change to anything else.\nOtherwise, we risk conflating this engineering effort with others that\nreally should be separate endeavors. For example, as far as possible,\nI think it would be best to try to do this without changing the\nstatistics that are currently gathered, and just make the best\ndecisions we can with the information we already have. Ideally, I'd\nlike to avoid introducing a new kind of relation fork that uses a\ndifferent on-disk storage format (e.g. 16MB segments that are dropped\nfrom the tail) rather than the one used by the other forks, but I'm\nnot sure we can get away with that, because conveyor-belt storage\nlooks pretty appealing here. 
Regardless, the more we have to change to\naccomplish the immediate goal, the more likely we are to introduce\ninstability into places where it could have been avoided, or to get\ntangled up in endless bikeshedding.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 23 Apr 2021 11:44:09 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Fri, Apr 23, 2021 at 8:44 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Thu, Apr 22, 2021 at 4:52 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Mostly what I'm saying is that I would like to put together a rough\n> > list of things that we could do to improve VACUUM along the lines\n> > we've discussed -- all of which stem from $SUBJECT. There are\n> > literally dozens of goals (some of which are quite disparate) that we\n> > could conceivably set out to pursue under the banner of $SUBJECT.\n>\n> I hope not. I don't have a clue why there would be dozens of possible\n> goals here, or why it matters.\n\nNot completely distinct goals, for the most part, but I can certainly\nsee dozens of benefits.\n\nFor example, if we know before index vacuuming starts that heap\nvacuuming definitely won't go ahead (quite possible when we decide\nthat we're only vacuuming a subset of indexes), we can then tell the\nindex AM about that fact. It can then safely vacuum in an\n\"approximate\" fashion, for example by skipping pages whose LSNs are\nfrom before the last VACUUM, and by not bothering with a\nsuper-exclusive lock in the case of nbtree.\n\nThe risk of a conflict between this goal and another goal that we may\nwant to pursue (which might be a bit contrived) is that we fail to do\nthe right thing when a large range deletion has taken place, which\nmust be accounted in the statistics, but creates a tension with the\nglobal index stuff. 
It's probably only safe to do this when we know\nthat there have been hardly any DELETEs. There is also the question of\nhow the TID map thing interacts with the visibility map, and how that\naffects how VACUUM behaves (both in general and in order to attain\nsome kind of specific new benefit from this synergy).\n\nWho knows? We're never going to get on exactly the same page, but some\nrough idea of which page each of us are on might save everybody time.\n\nThe stuff that I went into about making aborted transactions special\nas a means of decoupling transaction status management from garbage\ncollection is arguably totally unrelated -- perhaps it's just too much\nof a stretch to link that to what you want to do now. I suppose it's\nhard to invest the time to engage with me on that stuff, and I\nwouldn't be surprised if you never did so. If it doesn't happen it\nwould be understandable, though quite possibly a missed opportunity\nfor both of us. My basic intuition there is that it's another variety\nof decoupling, so (for better or worse) it does *seem* related to me.\n(I am an intuitive thinker, which has advantages and disadvantages.)\n\n> I think if we're going to do something\n> like $SUBJECT, we should just concentrate on the best way to make that\n> particular change happen with minimal change to anything else.\n> Otherwise, we risk conflating this engineering effort with others that\n> really should be separate endeavors.\n\nOf course it's true that that is a risk. That doesn't mean that the\nopposite risk is not also a concern. I am concerned about both risks.\nI'm not sure which risk I should be more concerned about.\n\nI agree that we ought to focus on a select few goals as part of the\nfirst round of work in this area (without necessarily doing all or\neven most of them at the same time). It's not self-evident which goals\nthose should be at this point, though. You've said that you're\ninterested in global indexes. Okay, that's a start. 
I'll add the basic\nidea of not doing index vacuuming for some indexes and not others to\nthe list -- this will necessitate that we teach index AMs to assess\nhow much bloat the index has accumulated since the last VACUUM, which\npresumably must work in some generalized, composable way.\n\n> For example, as far as possible,\n> I think it would be best to try to do this without changing the\n> statistics that are currently gathered, and just make the best\n> decisions we can with the information we already have.\n\nI have no idea if that's the right way to do it. In any case the\nstatistics that we gather influence the behavior of autovacuum.c, but\nnothing stops us from doing our own dynamic gathering of statistics to\ndecide what we should do within vacuumlazy.c each time. We don't have\nto change the basic triggering conditions to change the work each\nVACUUM performs.\n\nAs I've said before, I think that we're likely to get more benefit (at\nleast at first) from making the actual reality of what VACUUM does\nsimpler and more predictable in practice than we are from changing how\nreality is modeled inside autovacuum.c. I'll go further with that now:\nif we do change that modelling at some point, I think that it should\nwork in an additive way, which can probably be compatible with how the\nstatistics and so on work already. For example, maybe vacuumlazy.c\nasks autovacuum.c to do a VACUUM earlier next time. This can be\nstructured as an exception to the general rule of autovacuum\nscheduling, probably -- something that occurs when it becomes evident\nthat the generic schedule isn't quite cutting it in some important,\nspecific way.\n\n> Ideally, I'd\n> like to avoid introducing a new kind of relation fork that uses a\n> different on-disk storage format (e.g. 
16MB segments that are dropped\n> from the tail) rather than the one used by the other forks, but I'm\n> not sure we can get away with that, because conveyor-belt storage\n> looks pretty appealing here.\n\nNo opinion on that just yet.\n\n> Regardless, the more we have to change to\n> accomplish the immediate goal, the more likely we are to introduce\n> instability into places where it could have been avoided, or to get\n> tangled up in endless bikeshedding.\n\nCertainly true. I'm not really trying to convince you of specific\nactionable points just yet, though. Perhaps that was the problem (or\nperhaps it simply led to miscommunication). It would be so much easier\nto discuss some of this stuff at an event like pgCon. Oh well.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 23 Apr 2021 11:55:53 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Thu, Apr 22, 2021 at 1:01 PM Andres Freund <andres@anarazel.de> wrote:\n> The gin case seems a bit easier than the partial index case. Keeping\n> stats about the number of new entries in a GIN index doesn't seem too\n> hard, nor does tracking the number of cleaned up index entries. But\n> knowing which indexes are affected when a heap tuple becomes dead seems\n> harder. I guess we could just start doing a stats-only version of\n> ExecInsertIndexTuples() for deletes, but obviously the cost of that is\n> not enticing. Perhaps it'd not be too bad if we only did it when there's\n> an index with predicates?\n\nThough I agree that we need some handling here, I doubt that an index\nwith a predicate is truly a special case.\n\nSuppose you have a partial index that covers 10% of the table. How is\nthat meaningfully different from an index without a predicate that is\notherwise equivalent? 
If the churn occurs in the same keyspace in\neither case, and if that's the part of the keyspace that queries care\nabout, then ISTM that the practical difference is fairly\ninsignificant. (If you have some churn all over the standard index by\nqueries are only interested in the same 10% of the full keyspace then\nthis will be less true, but still roughly true.)\n\nThere is an understandable tendency to focus on the total size of the\nindex in each case, and to be alarmed that the partial index has (say)\ndoubled in size, while at the same time not being overly concerned\nabout lower *proportionate* growth for the standard index case\n(assuming otherwise identical workload/conditions). The page splits\nthat affect the same 10% of the key space in each case will be\napproximately as harmful in each case, though. We can expect the same\ngrowth in leaf pages in each case, which will look very similar.\n\nIt should be obvious that it's somewhat of a problem that 90% of the\nstandard index is apparently not useful (this is an unrelated\nproblem). But if the DBA fixes this unrelated problem (by making the\nstandard index a partial index), surely it would be absurd to then\nconclude that that helpful intervention somehow had the effect of\nmaking the index bloat situation much worse!\n\nI think that a simple heuristic could work very well here, but it\nneeds to be at least a little sensitive to the extremes. And I mean\nall of the extremes, not just the one from my example -- every\nvariation exists and will cause problems if given zero weight.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 23 Apr 2021 13:04:41 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Fri, Apr 23, 2021 at 1:04 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I think that a simple heuristic could work very well here, but it\n> needs to be at least a little sensitive to the extremes. 
And I mean\n> all of the extremes, not just the one from my example -- every\n> variation exists and will cause problems if given zero weight.\n\nTo expand on this a bit, my objection to counting the number of live\ntuples in the index (as a means to determining how aggressively each\nindividual index needs to be vacuumed) is this: it's driven by\npositive feedback, not negative feedback. We should focus on *extreme*\nadverse events (e.g., version-driven page splits) instead. We don't\neven need to understand ordinary adverse events (e.g., how many dead\ntuples are in the index).\n\nThe cost of accumulating dead tuples in an index (could be almost any\nindex AM) grows very slowly at first, and then suddenly explodes\n(actually it's more like a cascade of correlated explosions, but for\nthe purposes of this explanation that doesn't matter). In a way, this\nmakes life easy for us. The cost of accumulating dead tuples rises so\ndramatically at a certain inflection point that we can reasonably\nassume that that's all that matters -- just stop the explosions. An\nextremely simple heuristic that prevents these extreme adverse events\ncan work very well because that's where almost all of the possible\ndownside is. We can be sure that these extreme adverse events are\nuniversally very harmful (workload doesn't matter). Note that the same\nis not true for an approach driven by positive feedback -- it'll be\nfragile because it depends on workload characteristics in unfathomably\nmany ways. We should focus on what we can understand with a high\ndegree of confidence.\n\nWe just need to identify what the extreme adverse event is in each\nindex AM, count them, and focus on those (could be a VACUUM thing,\ncould be local to the index AM like bottom-up deletion is). We need to\nnotice when things are *starting* to go really badly and intervene\naggressively. 
So we need to be willing to try a generic index\nvacuuming strategy first, and then notice that it has just failed, or\nis just about to fail. Something like version-driven page splits\nreally shouldn't ever happen, so even a very crude approach will\nprobably work very well.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 24 Apr 2021 11:21:49 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "Hi,\n\nOn 2021-04-24 11:21:49 -0700, Peter Geoghegan wrote:\n> To expand on this a bit, my objection to counting the number of live\n> tuples in the index (as a means to determining how aggressively each\n> individual index needs to be vacuumed) is this: it's driven by\n> positive feedback, not negative feedback. We should focus on *extreme*\n> adverse events (e.g., version-driven page splits) instead. We don't\n> even need to understand ordinary adverse events (e.g., how many dead\n> tuples are in the index).\n\nI don't see how that's good enough as a general approach. It won't work\non indexes that insert on one end, delete from the other (think\ninserted_at or serial primary keys in many workloads).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 24 Apr 2021 11:43:12 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Sat, Apr 24, 2021 at 11:43 AM Andres Freund <andres@anarazel.de> wrote:\n> I don't see how that's good enough as a general approach. It won't work\n> on indexes that insert on one end, delete from the other (think\n> inserted_at or serial primary keys in many workloads).\n\nThat can be treated as another extreme that we need to treat as\nnegative feedback. There may also be other types of negative feedback\nthat occur only in some index AMs, that neither of us have thought of\njust yet. But that's okay -- we can just add that to the list. 
Some\nvarieties of negative feedback might be much more common in practice\nthan others. This shouldn't matter.\n\nThe number of live tuples (or even dead tuples) in the whole entire\nindex is simply not a useful proxy for what actually matters -- this\nis 100% clear. There are many cases where this will do completely the\nwrong thing, even if we have perfectly accurate information. I'll\nspare you a repeat of the example of bottom-up index deletion and\n\"Schrodinger's dead index tuples\" (it's not the only example, just the\npurest).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 24 Apr 2021 11:59:29 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "Hi,\n\nOn 2021-04-24 11:59:29 -0700, Peter Geoghegan wrote:\n> The number of live tuples (or even dead tuples) in the whole entire\n> index is simply not a useful proxy for what actually matters -- this\n> is 100% clear.\n\nDid anybody actually argue for using #live entries directly? I think\n*dead* entries is more relevant, particularly because various forms of\nlocal cleanup can be taken into account. Live tuples might come in to\nput the number of dead tuples into perspective, but otherwise not that\nmuch?\n\n\n> There are many cases where this will do completely the wrong thing,\n> even if we have perfectly accurate information.\n\nImo the question isn't really whether criteria will ever do something\nwrong, but how often and how consequential such mistakes will\nbe. E.g. 
unnecessarily vacuuming an index isn't fun, but it's better\nthan ending up never cleaning up dead index pointers despite repeat\naccesses (think bitmap scans).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 24 Apr 2021 12:56:43 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Sat, Apr 24, 2021 at 12:56 PM Andres Freund <andres@anarazel.de> wrote:\n> Did anybody actually argue for using #live entries directly? I think\n> *dead* entries is more relevant, particularly because various forms of\n> local cleanup can be taken into account. Live tuples might come in to\n> put the number of dead tuples into perspective, but otherwise not that\n> much?\n\nI was unclear. I can't imagine how you'd do anything like this without\nusing both together. Or if you didn't use live tuples you'd use heap\nblocks instead. Something like that.\n\n> > There are many cases where this will do completely the wrong thing,\n> > even if we have perfectly accurate information.\n>\n> Imo the question isn't really whether criteria will ever do something\n> wrong, but how often and how consequential such mistakes will\n> be. E.g. unnecessarily vacuuming an index isn't fun, but it's better\n> than ending up never cleaning up dead index pointers despite repeat\n> accesses (think bitmap scans).\n\nI strongly agree. The risk with what I propose is that we'd somehow\noverlook a relevant extreme cost. But I think that that's an\nacceptable risk. Plus I see no workable alternative -- your \"indexes\nthat insert on one end, delete from the other\" example works much\nbetter as an argument against what you propose than an argument\nagainst my own alternative proposal. 
Which reminds me: how would your\nframework for index bloat/skipping indexes in VACUUM deal cope with\nthis same scenario?\n\nThough I don't think that it's useful to use quantitative thinking as\na starting point here, that doesn't mean there is exactly zero role\nfor it. Not sure about how far I'd go here. But I would probably not\nargue that we shouldn't vacuum an index that is known to (say) be more\nthan 60% dead tuples. I guess I'd always prefer to have a better\nmetric, but speaking hypothetically: Why take a chance? This is not\nbecause it's definitely worth it -- it really isn't! It's just because\nthe benefit of being right is low compared to the cost of being wrong\n-- as you point out, that is really important.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 24 Apr 2021 13:17:22 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Sat, Apr 24, 2021 at 1:17 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Sat, Apr 24, 2021 at 12:56 PM Andres Freund <andres@anarazel.de> wrote:\n> > Imo the question isn't really whether criteria will ever do something\n> > wrong, but how often and how consequential such mistakes will\n> > be. E.g. unnecessarily vacuuming an index isn't fun, but it's better\n> > than ending up not never cleaning up dead index pointers despite repeat\n> > accesses (think bitmap scans).\n>\n> I strongly agree. The risk with what I propose is that we'd somehow\n> overlook a relevant extreme cost. But I think that that's an\n> acceptable risk.\n\nIMV the goal here is not really to skip index vacuuming when it's\nunnecessary. The goal is to do *more* index vacuuming when and where\nit *is* necessary (in one problematic index among several) -- maybe\neven much much more. 
We currently treat index vacuuming as an\nall-or-nothing thing at the level of the table, which makes this\nimpossible.\n\nThis is another reason why we can be pretty conservative about\nskipping. We only need to skip index vacuuming those indexes that\nwe're pretty confident just don't need it -- that's sufficient to be\nable to do vastly more index vacuuming where it is needed in almost\nall cases. There is some gray area, but that seems much less\ninteresting to me.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 24 Apr 2021 13:39:03 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Sat, Apr 24, 2021 at 12:22 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Apr 23, 2021 at 7:04 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > I think we can divide the TID fork into 16MB or 32MB chunks like WAL\n> > segment files so that we can easily remove old chunks. Regarding the\n> > efficient search part, I think we need to consider the case where the\n> > TID fork gets bigger than maintenance_work_mem. In that case, during\n> > the heap scan, we need to check if the discovered TID exists in a\n> > chunk of the TID fork that could be on the disk. Even if all\n> > known-dead-TIDs are loaded into an array on the memory, it could get\n> > much slower than the current heap scan to bsearch over the array for\n> > each dead TID discovered during heap scan. So it would be better to\n> > have a way to skip searching by already recorded TIDs. For example,\n> > during heap scan or HOT pruning, I think that when marking TIDs dead\n> > and recording it to the dead TID fork we can mark them “dead and\n> > recorded” instead of just “dead” so that future heap scans can skip\n> > those TIDs without existence check.\n>\n> I'm not very excited about this. 
If we did this, and if we ever\n> generated dead-but-not-recorded TIDs, then we will potentially dirty\n> those blocks again later to mark them recorded.\n\nSince the idea I imagined is that we always mark a TID recorded at the\nsame time as marking it dead, we don't dirty the page again, but,\nyes, if we do that the recorded flag is not necessary. We can simply\nassume that a TID marked dead has been recorded to the TID fork. Future\nvacuums can skip TIDs that are already marked dead.\n\n>\n> Also, if bsearch() is a bottleneck, how about just using an O(1)\n> algorithm instead of an O(lg n) algorithm, rather than changing the\n> on-disk format?\n>\n> Also, can you clarify exactly what you think the problem case is here?\n> It seems to me that:\n>\n> 1. If we're pruning the heap to collect dead TIDs, we should stop when\n> the number of TIDs we've accumulated reaches maintenance_work_mem. It\n> is possible that we could find when starting to prune that there are\n> *already* more dead TIDs than will fit, because maintenance_work_mem\n> might have been reduced since they were gathered. But that's OK: we\n> can figure out that there are more than will fit without loading them\n> all, and since we shouldn't do additional pruning in this case,\n> there's no issue.\n\nThe case I'm thinking of is that pruning the heap and sanitizing\nindexes run concurrently, since you mentioned that concurrency is one\nof the benefits of decoupling vacuum phases. In that case, one process\nis doing index vacuuming using known-dead-TIDs in the TID fork while\nanother process is appending new dead TIDs. We can suspend heap\npruning until the size of the TID fork gets smaller, as you mentioned,\nbut it seems inefficient.\n\n>\n> 2. If we're sanitizing indexes, we should normally discover that there\n> are few enough TIDs that we can still fit them all in memory. 
But if\n> that proves not to be the case, again because for example\n> maintenance_work_mem has been reduced, then we can handle that with\n> multiple index passes just as we do today.\n\nYeah, there seems to be room for improvement, but it's no worse than\ntoday. I imagine users will want to set a high maintenance_work_mem for\nsanitizing global indexes separately from the setting for heap pruning.\n\n>\n> 3. If we're going back to the heap to permit TIDs to be recycled by\n> setting dead line pointers to unused, we can load in as many of those\n> as will fit in maintenance_work_mem, sort them by block number, and go\n> through block by block and DTRT. Then, we can release all that memory\n> and, if necessary, do the whole thing again. This isn't even\n> particularly inefficient.\n\nAgreed.\n\nJust an idea: while pruning the heap, we can buffer the collected\ndead TIDs before writing the TID fork to the disk so that we can sort\nthe dead TIDs in a chunk (say a 16MB chunk consisting of 8KB blocks)? We\nwrite the chunk to the disk either when the chunk is filled with dead\nTIDs or when index sanitizing starts. The latter timing is required to\nremember the chunk ID or the uint64 ID of the dead TID up to which\nindex sanitizing has removed dead TIDs. One of the benefits would be\nto reduce the disk I/O for the dead TID fork. 
Another would be we’re\nlikely to complete the recycle phase in one heap scan since we load\nonly one block per chunk during scanning the heap.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 26 Apr 2021 09:32:22 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Fri, Apr 23, 2021 at 5:01 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-04-22 12:15:27 -0400, Robert Haas wrote:\n> > On Wed, Apr 21, 2021 at 5:38 PM Andres Freund <andres@anarazel.de> wrote:\n> > > I'm not sure that's the only way to deal with this. While some form of\n> > > generic \"conveyor belt\" infrastructure would be a useful building block,\n> > > and it'd be sensible to use it here if it existed, it seems feasible to\n> > > dead tids in a different way here. You could e.g. have per-heap-vacuum\n> > > files with a header containing LSNs that indicate the age of the\n> > > contents.\n> >\n> > That's true, but have some reservations about being overly reliant on\n> > the filesystem to provide structure here. There are good reasons to be\n> > worried about bloating the number of files in the data directory. Hmm,\n> > but maybe we could mitigate that. First, we could skip this for small\n> > relations. If you can vacuum the table and all of its indexes using\n> > the naive algorithm in <10 seconds, you probably shouldn't do anything\n> > fancy. That would *greatly* reduce the number of additional files\n> > generated. Second, we could forget about treating them as separate\n> > relation forks and make them some other kind of thing entirely, in a\n> > separate directory\n>\n> I'm not *too* worried about this issue. IMO the big difference to the\n> cost of additional relation forks is that such files would only exist\n> when the table is modified to a somewhat meaningful degree. 
IME the\n> practical issues with the number of files due to forks are cases where\n> huge number of tables that are practically never modified exist.\n>\n> That's not to say that I am sure that some form of \"conveyor belt\"\n> storage *wouldn't* be the right thing. How were you thinking of dealing\n> with the per-relation aspects of this? One conveyor belt per relation?\n>\n>\n> > especially if we adopted Sawada-san's proposal to skip WAL logging. I\n> > don't know if that proposal is actually a good idea, because it\n> > effectively adds a performance penalty when you crash or fail over,\n> > and that sort of thing can be an unpleasant surprise. But it's\n> > something to think about.\n>\n> I'm doubtful about skipping WAL logging entirely - I'd have to think\n> harder about it, but I think that'd mean we'd restart from scratch after\n> crashes / immediate restarts as well, because we couldn't rely on the\n> contents of the \"dead tid\" files to be accurate. In addition to the\n> replication issues you mention.\n\nYeah, not having WAL would have a big negative impact on other various\naspects. Can we piggyback the WAL for the TID fork and\nXLOG_HEAP2_PRUNE? 
That is, we add the buffer for the TID fork to\nXLOG_HEAP2_PRUNE and record one 64-bit number of the first dead TID in\nthe list so that we can add dead TIDs to the TID fork during replaying\nXLOG_HEAP2_PRUNE.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 6 May 2021 11:56:24 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Thu, May 6, 2021 at 8:27 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> > I'm doubtful about skipping WAL logging entirely - I'd have to think\n> > harder about it, but I think that'd mean we'd restart from scratch after\n> > crashes / immediate restarts as well, because we couldn't rely on the\n> > contents of the \"dead tid\" files to be accurate. In addition to the\n> > replication issues you mention.\n>\n> Yeah, not having WAL would have a big negative impact on other various\n> aspects. Can we piggyback the WAL for the TID fork and\n> XLOG_HEAP2_PRUNE? That is, we add the buffer for the TID fork to\n> XLOG_HEAP2_PRUNE and record one 64-bit number of the first dead TID in\n> the list so that we can add dead TIDs to the TID fork during replaying\n> XLOG_HEAP2_PRUNE.\n\nThat could be an option but we need to be careful about the buffer\nlock order because now we will have to hold the lock on the TID fork\nbuffer as well as the heap buffer so that we don't create any\ndeadlock. 
And there is also a possibility of holding the lock on\nmultiple TID fork buffers, which will depend upon how many tid we have\npruned.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 6 May 2021 12:08:04 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Thu, May 6, 2021 at 3:38 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, May 6, 2021 at 8:27 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > > I'm doubtful about skipping WAL logging entirely - I'd have to think\n> > > harder about it, but I think that'd mean we'd restart from scratch after\n> > > crashes / immediate restarts as well, because we couldn't rely on the\n> > > contents of the \"dead tid\" files to be accurate. In addition to the\n> > > replication issues you mention.\n> >\n> > Yeah, not having WAL would have a big negative impact on other various\n> > aspects. Can we piggyback the WAL for the TID fork and\n> > XLOG_HEAP2_PRUNE? That is, we add the buffer for the TID fork to\n> > XLOG_HEAP2_PRUNE and record one 64-bit number of the first dead TID in\n> > the list so that we can add dead TIDs to the TID fork during replaying\n> > XLOG_HEAP2_PRUNE.\n>\n> That could be an option but we need to be careful about the buffer\n> lock order because now we will have to hold the lock on the TID fork\n> buffer as well as the heap buffer so that we don't create any\n> deadlock. And there is also a possibility of holding the lock on\n> multiple TID fork buffers, which will depend upon how many tid we have\n> pruned.\n\nNot sure we will need to hold buffer locks for both the TID fork and\nthe heap at the same time but I agree that we could need to lock on\nmultiple TID fork buffers. We could need to add dead TIDs to up to two\npages for the TID fork during replaying XLOG_HEAP2_PRUNE since we\nwrite it per heap pages. 
Probably we can process one by one.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 6 May 2021 18:02:13 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Thu, May 6, 2021 at 5:02 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> Not sure we will need to hold buffer locks for both the TID fork and\n> the heap at the same time but I agree that we could need to lock on\n> multiple TID fork buffers. We could need to add dead TIDs to up to two\n> pages for the TID fork during replaying XLOG_HEAP2_PRUNE since we\n> write it per heap pages. Probably we can process one by one.\n\nIt seems like we do need to hold them at the same time, because\ntypically for a WAL record you lock all the buffers, modify them all\nwhile writing the WAL record, and then unlock them all.\n\nNow maybe there's some argument that we can dodge that requirement\nhere, but I have reservations about departing from the usual locking\npattern. It's easier to reason about the behavior when everybody\nfollows the same set of rules.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 6 May 2021 06:19:32 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Thu, May 6, 2021 at 7:19 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, May 6, 2021 at 5:02 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > Not sure we will need to hold buffer locks for both the TID fork and\n> > the heap at the same time but I agree that we could need to lock on\n> > multiple TID fork buffers. We could need to add dead TIDs to up to two\n> > pages for the TID fork during replaying XLOG_HEAP2_PRUNE since we\n> > write it per heap pages. 
Probably we can process one by one.\n>\n> It seems like we do need to hold them at the same time, because\n> typically for a WAL record you lock all the buffers, modify them all\n> while writing the WAL record, and then unlock them all.\n>\n> Now maybe there's some argument that we can dodge that requirement\n> here, but I have reservations about departing from the usual locking\n> pattern. It's easier to reason about the behavior when everybody\n> follows the same set of rules.\n\nYes, agreed. I was thinking of replaying WAL, not writing WAL.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 6 May 2021 19:42:14 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Thu, 6 May 2021 at 4:12 PM, Masahiko Sawada <sawada.mshk@gmail.com>\nwrote:\n\n> On Thu, May 6, 2021 at 7:19 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Thu, May 6, 2021 at 5:02 AM Masahiko Sawada <sawada.mshk@gmail.com>\n> wrote:\n> > > Not sure we will need to hold buffer locks for both the TID fork and\n> > > the heap at the same time but I agree that we could need to lock on\n> > > multiple TID fork buffers. We could need to add dead TIDs to up to two\n> > > pages for the TID fork during replaying XLOG_HEAP2_PRUNE since we\n> > > write it per heap pages. Probably we can process one by one.\n> >\n> > It seems like we do need to hold them at the same time, because\n> > typically for a WAL record you lock all the buffers, modify them all\n> > while writing the WAL record, and then unlock them all.\n> >\n> > Now maybe there's some argument that we can dodge that requirement\n> > here, but I have reservations about departing from the usual locking\n> > pattern. It's easier to reason about the behavior when everybody\n> > follows the same set of rules.\n>\n> Yes, agreed. 
I was thinking of replaying WAL, not writing WAL.\n\n\nRight, I was pointing to while writing the WAL.\n\n> --\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 6 May 2021 18:02:29 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "Andres Freund <andres@anarazel.de> wrote:\n\n> On 2021-04-21 11:21:31 -0400, Robert Haas wrote:\n> > This scheme adds a lot of complexity, which is a concern, but it seems\n> > to me that it might have several benefits. One is concurrency.
You\n> > could have one process gathering dead TIDs and adding them to the\n> > dead-TID fork while another process is vacuuming previously-gathered\n> > TIDs from some index.\n> \n> I think it might even open the door to using multiple processes\n> gathering dead TIDs for the same relation.\n\nI think the possible concurrency improvements are themselves a valid reason to\ndo the decoupling. Or rather it's hard to imagine how the current\nimplementation of VACUUM can get parallel workers involved in gathering the\ndead heap TIDs efficiently. Currently, a single backend gathers the heap TIDs,\nand it can then launch several parallel workers to remove the TIDs from\nindexes. If parallel workers gathered the heap TIDs, then (w/o the decoupling)\nthe parallel index processing would be a problem because a parallel worker\ncannot launch other parallel workers.\n\n> > In fact, every index could be getting vacuumed at the same time, and\n> > different indexes could be removing different TID ranges.\n> \n> We kind of have this feature right now, due to parallel vacuum...\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Wed, 23 Jun 2021 11:26:32 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Wed, Apr 21, 2021 at 8:21 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Now, the reason for this is that when we discover dead TIDs, we only\n> record them in memory, not on disk. So, as soon as VACUUM ends, we\n> lose all knowledge of whether those TIDs were and must rediscover\n> them. Suppose we didn't do this, and instead had a \"dead TID\" fork for\n> each table. Suppose further that this worked like a conveyor belt,\n> similar to WAL, where every dead TID we store into the fork is\n> assigned an identifying 64-bit number that is never reused.\n\nHave you started any work on this project? 
I think that it's a very good idea.\n\nEnabling index-only scans is a good enough reason to pursue this\nproject, even on its own. The flexibility that this design offers\nallows VACUUM to run far more aggressively, with little possible\ndownside. It makes it possible for VACUUM to run so frequently that it\nrarely dirties pages most of the time -- at least in many important\ncases. Imagine if VACUUM almost kept in lockstep with inserters into\nan append-mostly table -- that would be great. The main blocker to\nmaking VACUUM behave like that is of course indexes.\n\nSetting visibility map bits during VACUUM can make future vacuuming\ncheaper (for the obvious reason), which *also* makes it cheaper to set\n*most* visibility map bits as the table is further extended, which in\nturn makes future vacuuming cheaper...and so on. This virtuous circle\nseems like it might be really important. Especially once you factor in\nthe cost of dirtying pages a second or a third time. I think that we\ncan really keep the number of times VACUUM dirties pages under\ncontrol, simply by decoupling. Decoupling is key to keeping the costs\nto a minimum.\n\nI attached a POC autovacuum logging instrumentation patch that shows\nhow VACUUM uses *and* sets VM bits. I wrote this for my TPC-C + FSM\nwork. Seeing both things together, and seeing how both things *change*\nover time was a real eye opener for me: it turns out that the master\nbranch keeps setting and resetting VM bit pages in the two big\nappend-mostly tables that are causing so much trouble for Postgres\ntoday. What we see right now is pretty disorderly -- the numbers don't\ntrend in the right direction when they should. But it could be a lot\nmore orderly, with a little work.\n\nThis instrumentation helped me to discover a better approach to\nindexing within TPC-C, based on index-only scans [1]. 
It also made me\nrealize that it's possible for a table to have real problems with dead\ntuple cleanup in indexes, while nevertheless being an effective target\nfor index-only scans. There is actually no good reason to think that\none condition should preclude the other -- they may very well go\ntogether. You did say this yourself when talking about global indexes,\nbut there is no reason to think that it's limited to partitioning\ncases. The current \"ANALYZE dead_tuples statistics\" paradigm cannot\nrecognize when both conditions go together, even though I now think\nthat it's fairly common. I also like your idea here because it enables\na more qualitative approach, based on recent information for recently\nmodified blocks -- not whole-table statistics. Averages are\nnotoriously misleading.\n\n[1] https://github.com/pgsql-io/benchmarksql/pull/16\n-- \nPeter Geoghegan", "msg_date": "Wed, 15 Sep 2021 15:08:41 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Thu, Sep 16, 2021 at 7:09 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Wed, Apr 21, 2021 at 8:21 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > Now, the reason for this is that when we discover dead TIDs, we only\n> > record them in memory, not on disk. So, as soon as VACUUM ends, we\n> > lose all knowledge of whether those TIDs were and must rediscover\n> > them. Suppose we didn't do this, and instead had a \"dead TID\" fork for\n> > each table. 
Suppose further that this worked like a conveyor belt,\n> > similar to WAL, where every dead TID we store into the fork is\n> > assigned an identifying 64-bit number that is never reused.\n>\n> Enabling index-only scans is a good enough reason to pursue this\n> project, even on its own.\n\n+1\n\n> I attached a POC autovacuum logging instrumentation patch that shows\n> how VACUUM uses *and* sets VM bits.\n\nLogging how vacuum uses and sets VM bits seems a good idea.\n\nI've read the proposed PoC patch. Probably it's better to start a new\nthread for this patch and write the comment for it there but let me\nleave one comment on the patch:\n\nWith the patch, we increment allfrozen_pages counter, which is used to\ndetermine whether or not we advance relfrozenxid and relminmxid, at\ntwo places:\n\n@@ -1141,7 +1201,9 @@ lazy_scan_heap(LVRelState *vacrel, VacuumParams\n*params, bool aggressive)\n * in this case an approximate answer is OK.\n */\n if (aggressive ||\nVM_ALL_FROZEN(vacrel->rel, blkno, &vmbuffer))\n- vacrel->frozenskipped_pages++;\n+ vacrel->allfrozen_pages++;\n+ else\n+ vacrel->allvisible_pages++;\n continue;\n\n@@ -1338,6 +1400,8 @@ lazy_scan_heap(LVRelState *vacrel, VacuumParams\n*params, bool aggressive)\n */\n if (!PageIsAllVisible(page))\n {\n+ vacrel->allfrozen_pages++;\n+\n\nI think that we will end up doubly counting the page as scanned_pages\nand allfrozen_pages due to the newly added latter change. 
This seems\nwrong to me because we calculate as follows:\n\n@@ -644,7 +656,7 @@ heap_vacuum_rel(Relation rel, VacuumParams *params,\n * NB: We need to check this before truncating the relation,\nbecause that\n * will change ->rel_pages.\n */\n- if ((vacrel->scanned_pages + vacrel->frozenskipped_pages)\n+ if ((vacrel->scanned_pages + vacrel->allfrozen_pages)\n < vacrel->rel_pages)\n {\n Assert(!aggressive);\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Fri, 24 Sep 2021 14:41:35 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Wed, Sep 15, 2021 at 6:08 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Have you started any work on this project? I think that it's a very good idea.\n\nActually, I have. I've been focusing on trying to create a general\ninfrastructure for conveyor belt storage. An incomplete and likely\nquite buggy version of this can be found here:\n\nhttps://git.postgresql.org/gitweb/?p=users/rhaas/postgres.git;a=shortlog;h=refs/heads/conveyor\n\nMark Dilger has been helping me debug it, but it's still very early\ndays. I was planning to wait until it was a little more baked before\nposting it to the list, but since you asked...\n\nOnce that infrastructure is sufficiently mature, then the next step, I\nthink, would be to try to use it to store dead TIDs.\n\nAnd then after that, one has to think about how autovacuum scheduling\nought to work in a world where table vacuuming and index vacuuming are\ndecoupled.\n\nThis is a very hard problem, and I don't expect to solve it quickly. 
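[Editorial aside: the conveyor-belt behavior discussed in this thread — positions handed out in order and never reused, with consumed entries trimmed from the front — can be illustrated with a minimal in-memory model. This is only a sketch, not the code on the linked branch.]

```python
class ConveyorBelt:
    """Toy conveyor belt: append-only, every entry gets a position that
    is never reused, and old entries can be trimmed away once every
    consumer is done with them."""

    def __init__(self):
        self.start = 0     # oldest position still retained
        self.entries = []  # entries[i] lives at position self.start + i

    def append(self, item):
        pos = self.start + len(self.entries)
        self.entries.append(item)
        return pos

    def get(self, pos):
        if not self.start <= pos < self.start + len(self.entries):
            raise KeyError("position trimmed away or not written yet")
        return self.entries[pos - self.start]

    def trim_before(self, pos):
        """Discard everything before pos; later appends continue the
        numbering, so positions are never reused."""
        drop = min(max(pos - self.start, 0), len(self.entries))
        del self.entries[:drop]
        self.start += drop
```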
I\ndo hope to keep plugging away at it, though.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 24 Sep 2021 14:48:40 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Thu, Sep 23, 2021 at 10:42 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> On Thu, Sep 16, 2021 at 7:09 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Enabling index-only scans is a good enough reason to pursue this\n> > project, even on its own.\n>\n> +1\n\nI was hoping that you might be able to work on opportunistically\nfreezing whole pages for Postgres 15. I think that it would make sense\nto opportunistically make a page that is about to become all_visible\nduring VACUUM become all_frozen instead. Our goal is to make most\npages skip all_visible, and go straight to all_frozen directly. Often\nthe page won't need to be dirtied again, ever.\n\nRight now freezing is something that we mostly just think about as\noccurring at the level of tuples, which doesn't seem ideal. This seems\nrelated to Robert's project because both projects are connected to the\nquestion of how autovacuum scheduling works in general. We will\nprobably need to rethink things like the vacuum_freeze_min_age GUC. (I\nalso think that we might need to reconsider how\naggressive/anti-wraparound VACUUMs work, but that's another story.)\n\nObviously this is a case of performing work eagerly; a form of\nspeculation that tries to lower costs in the aggregate, over time.\nHeuristics that work well on average seem possible, but even excellent\nheuristics could be wrong -- in the end we're trying to predict the\nfuture, which is inherently impossible to do reliably for all\nworkloads. 
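[Editorial aside: one way to picture the page-level policy proposed above — hedged heavily, since the real decision in lazy_scan_prune() involves xmin/xmax, MultiXacts, and more — is a choice like this, with all fields hypothetical.]

```python
def choose_vm_bit(page_tuples):
    """If the page is about to become all_visible anyway, and every tuple
    on it could also be frozen, freeze the whole page so it goes straight
    to all_frozen and ideally is never dirtied again."""
    if not all(t["visible_to_all"] for t in page_tuples):
        return "none"
    if all(t["freezable"] for t in page_tuples):
        for t in page_tuples:
            t["frozen"] = True
        return "all_frozen"
    return "all_visible"
```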
I think that that will be okay, provided that the cost of\nbeing wrong is kept low and *fixed* (the exact definition of \"fixed\"\nwill need to be defined, but the basic idea is that any regression is\nonce per page, not once per page per VACUUM or something).\n\nOnce it's much cheaper enough to freeze a whole page early (i.e. all\ntuple headers from all tuples), then the implementation can be wrong\n95%+ of the time, and maybe we'll still win by a lot. That may sound\nbad, until you realize that it's 95% *per VACUUM* -- the entire\nsituation is much better once you think about the picture for the\nentire table over time and across many different VACUUM operations,\nand once you think about FPIs in the WAL stream. We'll be paying the\ncost of freezing in smaller and more predictable increments, too,\nwhich can make the whole system more robust. Many pages that all go\nfrom all_visible to all_frozen at the same time (just because they\ncrossed some usually-meaningless XID-based threshold) is actually\nquite risky (this is why I mentioned aggressive VACUUMs in passing).\n\nThe hard part is getting the cost way down. lazy_scan_prune() uses\nxl_heap_freeze_tuple records for each tuple it freezes. These\nobviously have a lot of redundancy across tuples from the same page in\npractice. And the WAL overhead is much larger just because these are\nper-tuple records, not per-page records. Getting the cost down is hard\nbecause of issues with MultiXacts, freezing xmin but not freezing xmax\nat the same time, etc.\n\n> Logging how vacuum uses and sets VM bits seems a good idea.\n\n> I think that we will end up doubly counting the page as scanned_pages\n> and allfrozen_pages due to the newly added latter change. This seems\n> wrong to me because we calculate as follows:\n\nI agree that that's buggy. Oops.\n\nIt was just a prototype that I wrote for my own work. 
I do think that\nwe should have a patch that has some of this, for users, but I am not\nsure about the details just yet. This is probably too much information\nfor users, but I think it will take me more time to decide what really\ndoes matter to users.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 24 Sep 2021 18:17:21 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Fri, Sep 24, 2021 at 11:48 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Actually, I have. I've been focusing on trying to create a general\n> infrastructure for conveyor belt storage. An incomplete and likely\n> quite buggy version of this can be found here:\n>\n> https://git.postgresql.org/gitweb/?p=users/rhaas/postgres.git;a=shortlog;h=refs/heads/conveyor\n\nThat's great news! I think that this is the right high level direction.\n\n> Mark Dilger has been helping me debug it, but it's still very early\n> days. I was planning to wait until it was a little more baked before\n> posting it to the list, but since you asked...\n\nReminds me of my FSM patch, in a way. It's ambitious, and still very\nrough, but maybe I should bite the bullet and post it as a POC soon.\n\n> Once that infrastructure is sufficiently mature, then the next step, I\n> think, would be to try to use it to store dead TIDs.\n\n+1.\n\n> And then after that, one has to think about how autovacuum scheduling\n> ought to work in a world where table vacuuming and index vacuuming are\n> decoupled.\n\nI'm excited about the possibility of using this infrastructure as a\nspringboard for driving autovacuum's behavior using more or less\nauthoritative information, rather than dubious statistics that can\nconsistently lead us down the wrong path. 
ANALYZE style statistics are\nsomething that can only work under specific conditions that take their\nobvious limitations into account -- and even there (even within the\noptimizer) it's amazing that they work as well as they do. I fear that\nwe assumed that the statistics driving autovacuum were good enough at\nsome point in the distant past, and never really validated that\nassumption. Perhaps because anti-wraparound VACUUM was *accidentally*\nprotective.\n\nThe scheduling of autovacuum is itself a big problem for the two big\nBenchmarkSQL tables I'm always going on about -- though it did get a\nlot better with the introduction of the\nautovacuum_vacuum_insert_scale_factor stuff in Postgres 13. I recently\nnoticed that the tables have *every* autovacuum driven by inserts\n(i.e. by the new autovacuum_vacuum_scale_factor stuff), and never by\nupdates -- even though updates obviously produce significant bloat in\nthe two tables. BenchmarkSQL on Postgres was far worse than it is now\na few releases ago [1], and I think that this stats business was a big\nfactor (on top of everything else). I can clearly see that\nautovacuum_vacuum_scale_factor is certainly accidentally protective\nwith BenchmarkSQL today, in a way that wasn't particularly anticipated\nby anybody.\n\nThe fact that the intellectual justifications for a lot of these\nthings are so vague concerns me. For example, why do we apply\nautovacuum_vacuum_scale_factor based on reltuples at the end of the\nlast VACUUM? That aspect of the design will make much less sense once\nwe have this decoupling in place. Even with the happy accident of\nautovacuum_vacuum_insert_scale_factor helping BenchmarkSQL, the\nconventional dead tuples based approach to VACUUM still doesn't drive\nautovacuum sensibly -- we still systematically undercount LP_DEAD\nstubs because (with this workload) they're systemically concentrated\nin relatively few heap pages. 
So if this was a real app, the DBA would\nsomehow have to work out that they should aggressively tune\nautovacuum_vacuum_scale_factor to clean up bloat from updates. I doubt\nany DBA could ever figure that out, because it doesn't make any sense.\n\nThe problem goes both ways: in addition to undercounting dead tuples,\nwe effectively overcount, which can lead to autovacuum chasing its own\ntail [2].\n\nI think that we could do *way* better than we do today without\nenormous effort, and I think that it really matters. Maybe we could\nselect from a few standard models for autovacuum scheduling using\nBayesian inference -- converge on the more predictive model for a\ngiven table over time, using actual outcomes for each autovacuum. Be\nsensitive to how LP_DEAD stub line pointers can become concentrated in\nrelatively few heap pages, and stuff like that. Maybe keep a little\nhistory to work off of. The problem with the current model is not that\nit might be wrong. The problem is that it might *never* be right (for\na given table). The scheduling never learns any lessons, because it's\nfundamentally static -- it ought to be dynamic. How things change is\nmuch more informative than where things are at an arbitrary point in\ntime.\n\n[1] https://www.postgresql.org/message-id/flat/0265f9e2-3e32-e67d-f106-8abde596c0e4%40commandprompt.com\n[2] https://postgr.es/m/CAH2-Wz=sJm3tm+FpXbyBhEhX5tbz1trQrhG6eOhYk4-+5uL=ww@mail.gmail.com\n--\nPeter Geoghegan\n\n\n", "msg_date": "Fri, 24 Sep 2021 19:44:15 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Fri, Sep 24, 2021 at 7:44 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> The scheduling of autovacuum is itself a big problem for the two big\n> BenchmarkSQL tables I'm always going on about -- though it did get a\n> lot better with the introduction of the\n> autovacuum_vacuum_insert_scale_factor stuff in Postgres 13. 
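[Editorial aside: for reference, the trigger conditions under discussion are the documented ones — a table is queued when dead tuples exceed autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples, and (since Postgres 13) when inserts since the last vacuum exceed autovacuum_vacuum_insert_threshold + autovacuum_vacuum_insert_scale_factor * reltuples, where reltuples is the estimate left behind by the last VACUUM/ANALYZE — exactly the dependency being questioned here.]

```python
def autovacuum_triggers(reltuples, n_dead_tup, n_ins_since_vacuum,
                        vac_base=50, vac_scale=0.20,
                        ins_base=1000, ins_scale=0.20):
    """Returns (triggered_by_dead_tuples, triggered_by_inserts) using the
    documented autovacuum thresholds (defaults as of Postgres 13)."""
    by_dead = n_dead_tup > vac_base + vac_scale * reltuples
    by_insert = n_ins_since_vacuum > ins_base + ins_scale * reltuples
    return by_dead, by_insert
```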
I recently\n> noticed that the tables have *every* autovacuum driven by inserts\n> (i.e. by the new autovacuum_vacuum_scale_factor stuff), and never by\n> updates -- even though updates obviously produce significant bloat in\n> the two tables. BenchmarkSQL on Postgres was far worse than it is now\n> a few releases ago [1], and I think that this stats business was a big\n> factor (on top of everything else). I can clearly see that\n> autovacuum_vacuum_scale_factor is certainly accidentally protective\n> with BenchmarkSQL today, in a way that wasn't particularly anticipated\n> by anybody.\n\n> So if this was a real app, the DBA would\n> somehow have to work out that they should aggressively tune\n> autovacuum_vacuum_scale_factor to clean up bloat from updates. I doubt\n> any DBA could ever figure that out, because it doesn't make any sense.\n\nCorrection: I meant that the autovacuum_vacuum_insert_scale_factor GUC is\naccidentally protective with the BenchmarkSQL tables, and that no DBA\ncould be expected to figure this out. That is, it helps to lower\nautovacuum_vacuum_insert_scale_factor from its default of 0.20, just\nto get autovacuum to better handle bloat from *updates*. This has\nnothing to do with inserts, or with freeze or set VM bits -- and so\noverall it doesn't make any sense.\n\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Fri, 24 Sep 2021 20:08:19 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Sat, Sep 25, 2021 at 10:17 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Thu, Sep 23, 2021 at 10:42 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > On Thu, Sep 16, 2021 at 7:09 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > > Enabling index-only scans is a good enough reason to pursue this\n> > > project, even on its own.\n> >\n> > +1\n>\n> I was hoping that you might be able to work on opportunistically\n> freezing whole pages for Postgres 15. 
I think that it would make sense
> to opportunistically make a page that is about to become all_visible
> during VACUUM become all_frozen instead. Our goal is to make most
> pages skip all_visible, and go straight to all_frozen directly. Often
> the page won't need to be dirtied again, ever.

+1. I'm happy to work on this.

There was a similar proposal before[1]: if we freeze even one tuple on
a page, we freeze all tuples on the page, and set the page as all-frozen
if every tuple on the page can be frozen. This is also a good approach.

> The hard part is getting the cost way down. lazy_scan_prune() uses
> xl_heap_freeze_tuple records for each tuple it freezes. These
> obviously have a lot of redundancy across tuples from the same page in
> practice. And the WAL overhead is much larger just because these are
> per-tuple records, not per-page records.

xl_heap_freeze_page includes multiple xl_heap_freeze_tuple entries, but
don't we write one XLOG_HEAP2_FREEZE_PAGE WAL record per page? Which WAL
overhead were you referring to?

Regards,

[1] https://www.postgresql.org/message-id/CANP8%2Bj%2BEfLZMux6KLvb%2BumdeVYc%2BJZs5ReNSFq9WDLn%2BAKnhkg%40mail.gmail.com

-- 
Masahiko Sawada
EDB: https://www.enterprisedb.com/


", "msg_date": "Mon, 27 Sep 2021 12:17:37 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Mon, Sep 27, 2021 at 8:48 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\nHi,

Here is the first WIP patch for decoupling table and index vacuum.
The first mail of the thread has already explained the complete
background of why we want to do this, so instead of describing that
again I will jump directly into explaining what these patches do.

Currently, the table vacuum and index vacuum are executed as a single
operation. 
Basically, we vacuum the table by performing hot pruning
and remembering dead items in memory; then we perform the index
vacuum, and finally the second pass of the heap vacuum, under which we
mark items unused.

In this patch, we make these multiple vacuum passes independent
operations. The idea is that we provide multiple vacuum options,
under which the user can perform the independent operations, i.e.
"VACUUM (heap_hot_prune) tbl_name" for performing just the hot prune
(first vacuum) pass, and "VACUUM (heap_vacuum) tbl_name" for the second
heap pass, which sets unused those dead items whose index vacuum is
already done. Additionally, we now allow users to perform just the
index vacuum, i.e. "VACUUM idx_name".

So under the heap_hot_prune pass, we generate the dead tids, and
instead of directly performing the index vacuum we flush those
dead tids into the conveyor belt using the Deadtidstore interfaces. Then
in the index pass, we read the data from the conveyor belt and
perform the index vacuum, and finally, in the heap_vacuum pass, we
read the data from the conveyor belt and mark all dead items unused.
However, in the second pass, we can only mark unused those items which
are dead and for which all of the table's indexes have already been
vacuumed. To identify that, we store in the pg_class entry the
conveyor belt pageno up to which we have already done the index vacuum
(for an index's entry) and up to which we have already done the
heap_vacuum pass (for the table's entry). Additionally, while doing the
hot_prune pass, if an item is already dead and the index vacuum is
already done for it, we directly set it unused; for this too, we use
the Deadtidstore interfaces.

Deadtidstore provides interfaces for storing dead tids into, and
retrieving them from, the conveyor belt.
This module
maintains a DeadTidState, which keeps track of the current insertion
progress, i.e. the first and the last conveyor belt page for the current
vacuum run. On completion of the vacuum run, it takes care of setting
the complete vacuum run bound, by storing the last conveyor
belt pageno of the current vacuum run into the special space of the
first conveyor belt page for this run. This also provides the
infrastructure to avoid adding duplicate tids into the conveyor belt.
Basically, if we perform the first vacuum pass multiple times without
executing the second vacuum pass, it is possible that we encounter
the same dead tids in the conveyor belt, so this module maintains a
cache over the conveyor belt such that it only loads the data for
the current block the vacuum is processing; that way we don't
need to maintain a huge cache.

Test example:

CREATE TABLE t (a int);
CREATE INDEX idx on t(a);
INSERT INTO t VALUES (generate_series(1,1000000));
DELETE FROM t where a > 300;
VACUUM (heap_hot_prune) t;
VACUUM idx;
VACUUM (heap_vacuum) t;

TODO:
- This is just a POC patch to discuss the design idea and needs a lot
of improvement and testing.
- We are using a slightly different format for storing the dead tids
into the conveyor belt, which is explained in the patch, but the
traditional multi-pass vacuum is still using the same format (array of
ItemPointerData), so we need to unify that format.
- Performance testing.
- Cleaner interfaces so that this can easily be integrated with
autovacuum; currently this is provided only for the manual vacuum.
- Add test cases.

Patches can be applied on top of the latest conveyor belt patches[1].

[1] https://www.postgresql.org/message-id/CAFiTN-sQUddO9JPiH3tz%2BvbNqRqi_pgndecy8k2yXAnO3ymqZA%40mail.gmail.com

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 26 Jan 2022 19:28:08 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", 
"msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Wed, Jan 26, 2022 at 8:58 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> TODO:\n> - This is just a POC patch to discuss the design idea and needs a lot\n> of improvement and testing.\n> - We are using a slightly different format for storing the dead tids\n> into the conveyor belt which is explained in the patch but the\n> traditional multi-pass vacuum is still using the same format (array of\n> ItemPointeData), so we need to unify that format.\n> - Performance testing.\n> - Cleaner interfaces so that we can easily be integrated with auto\n> vacuum, currently, this is not provided for the manual vacuum.\n> - Add test cases.\n\nI think this is a pretty interesting piece of work. I appreciate the\neffort you've obviously put into the comments, although I do think\nsome of them are going to need some additional clarification. But I\nthink the bigger questions here at the moment are things like (1) Is\nthis the right idea? and if not (2) How could we change it to make it\nbetter? and (3) Is there any way that we can make it simpler? It was\nthe last of these questions that prompted me to post\nhttp://postgr.es/m/CA+TgmoY18RzQqDm2jE2WDkiA8ngTEDHp7uLtHb3a-ABs+wbY_g@mail.gmail.com\nbecause, if that thought were to work out, then we could have more\nthings in common between the conveyor-belt and non-conveyor-belt\ncases, and we might be able to start with some preliminary work to\njigger more things in to the second phase, and then look to integrate\nthe conveyor belt stuff separately.\n\nI think what we ought to do at this point is try to figure out some\ntests that might show how well this approach actually works in\npractice. Now one motivation for this work was the desire to someday\nhave global indexes, but those don't exist yet, so it makes sense to\nconsider other scenarios in which the patch might (or might not) be\nbeneficial. 
And it seems to me that we should be looking for a\nscenario where we have multiple indexes with different vacuuming\nneeds. How could that happen? Well, the first thing that occurred to\nme was a table with a partial index. If we have a column t whose\nvalues are randomly distributed between 1 and 10, and a partial index\non some other column WHERE t = 1, then the partial index should only\naccumulate dead tuples about 10% as fast as a non-partial index on the\nsame column. On the other hand, the partial index also has a much\nsmaller number of total rows, so after a fixed number of updates, the\npartial index should have the same *percentage* of dead tuples as the\nnon-partial index even though the absolute number is smaller. So maybe\nthat's not a great idea.\n\nMy second thought was that perhaps we can create a test scenario\nwhere, in one index, the deduplication and bottom-up index deletion\nand kill_prior_tuple mechanisms are very effective, and in another\nindex, it's not effective at all. For example, maybe index A is an\nindex on the primary key, and index B is a non-unique index on some\ncolumn that we're updating with ever-increasing values (so that we\nnever put new tuples into a page that could be productively cleaned\nup). I think what should happen in this case is that A should not grow\nin size even if it's never vacuumed, while B will require vacuuming to\nkeep the size down. If this example isn't exactly right, maybe we can\nconstruct one where that does happen. Then we could try to demonstrate\nthat with this patch we can do less vacuuming work and still keep up\nthan what would be possible without the patch. We'll either be able to\nshow that this is true, or we will see that it's false, or we won't be\nable to really see much difference. 
Any of those would be interesting\nfindings.\n\nOne thing we could try doing in order to make that easier would be:\ntweak things so that when autovacuum vacuums the table, it only\nvacuums the indexes if they meet some threshold for bloat. I'm not\nsure exactly what happens with the heap vacuuming then - do we do\nphases 1 and 2 always, or a combined heap pass, or what? But if we\npick some criteria that vacuums indexes sometimes and not other times,\nwe can probably start doing some meaningful measurement of whether\nthis patch is making bloat better or worse, and whether it's using\nfewer or more resources to do it.\n\nDo you have a git branch for this work?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 4 Feb 2022 13:15:27 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Fri, Feb 4, 2022 at 1:15 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> My second thought was that perhaps we can create a test scenario\n> where, in one index, the deduplication and bottom-up index deletion\n> and kill_prior_tuple mechanisms are very effective, and in another\n> index, it's not effective at all. For example, maybe index A is an\n> index on the primary key, and index B is a non-unique index on some\n> column that we're updating with ever-increasing values (so that we\n> never put new tuples into a page that could be productively cleaned\n> up). I think what should happen in this case is that A should not grow\n> in size even if it's never vacuumed, while B will require vacuuming to\n> keep the size down.\n\nThat should work. All you need is a table with several indexes, and a\nworkload consisting of updates that modify a column that is the key\ncolumn for only one of the indexes. 
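A minimal sketch of such a workload (entirely illustrative -- these
names and numbers are mine, not from any patch) might be:

```sql
-- Index A: the primary key, never logically modified by the updates below.
-- Index B: a non-unique index whose column only ever receives
-- ever-increasing values, so new entries never land on a page that could
-- be productively cleaned up.
CREATE TABLE t (pk int PRIMARY KEY, seq int);
CREATE INDEX t_seq_idx ON t (seq);
INSERT INTO t SELECT g, g FROM generate_series(1, 1000000) g;

-- Repeat many times: pk is untouched, so index A should stay about the
-- same size, while index B keeps absorbing brand-new, ever-higher values
-- and can only be kept small by index vacuuming.
UPDATE t SET seq = seq + 1000000 WHERE pk = (random() * 1000000)::int;
```
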
I would expect bottom-up index\ndeletion to be 100% effective for the not-logically-modified indexes,\nin the sense that there will be no page splits -- provided there are\nno long held snapshots, and provided that the index isn't very small.\nIf it is small (think of something like the pgbench_branches pkey),\nthen even the occasional ANALYZE will act as a \"long held snapshot\"\nrelative to the size of the index. And so then you might get one page\nsplit per original leaf page, but probably not a second, and very very\nprobably not a third.\n\nThe constantly modified index will be entirely dependent on index\nvacuuming here, and so an improved VACUUM design that allows that\nparticular index to be vacuumed more frequently could really improve\nperformance.\n\nBTW, it's a good idea to avoid unique indexes in test cases where\nthere is an index that you don't want to set LP_DEAD bits for, since\n_bt_check_unique() tends to do a good job of setting LP_DEAD bits,\nindependent of the kill_prior_tuple thing. You can avoid using\nkill_prior_tuple by forcing bitmap scans, of course.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 4 Feb 2022 13:46:06 -0500", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Fri, Feb 4, 2022 at 1:46 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> That should work. All you need is a table with several indexes, and a\n> workload consisting of updates that modify a column that is the key\n> column for only one of the indexes. I would expect bottom-up index\n> deletion to be 100% effective for the not-logically-modified indexes,\n> in the sense that there will be no page splits -- provided there are\n> no long held snapshots, and provided that the index isn't very small.\n> If it is small (think of something like the pgbench_branches pkey),\n> then even the occasional ANALYZE will act as a \"long held snapshot\"\n> relative to the size of the index. 
And so then you might get one page
> split per original leaf page, but probably not a second, and very very
> probably not a third.
>
> The constantly modified index will be entirely dependent on index
> vacuuming here, and so an improved VACUUM design that allows that
> particular index to be vacuumed more frequently could really improve
> performance.

Thanks for checking my work here - I wasn't 100% sure I had the right idea.

> BTW, it's a good idea to avoid unique indexes in test cases where
> there is an index that you don't want to set LP_DEAD bits for, since
> _bt_check_unique() tends to do a good job of setting LP_DEAD bits,
> independent of the kill_prior_tuple thing. You can avoid using
> kill_prior_tuple by forcing bitmap scans, of course.

Thanks for this tip, too.

-- 
Robert Haas
EDB: http://www.enterprisedb.com


", "msg_date": "Fri, 4 Feb 2022 13:54:45 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Fri, Feb 4, 2022 at 1:54 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > The constantly modified index will be entirely dependent on index\n> > vacuuming here, and so an improved VACUUM design that allows that\n> > particular index to be vacuumed more frequently could really improve\n> > performance.\n>\n> Thanks for checking my work here - I wasn't 100% sure I had the right idea.

I should perhaps have emphasized individual leaf pages, rather than
total index size. Presumably we only need to store so many extra
versions per logical row at any one time, and we have a fair amount of
free space for extra versions on leaf pages -- typically 10%-30% of the
page (when it isn't already inevitable that the page will eventually
split due to simple inserts). 
A traditional\nguarantee with B-Trees is that we get `ln(2)` space utilization with\nrandom insertions, which leaves just over 30% of the page free for\nlater updates -- that's where I got 30% from.\n\nThere is a complementary effect with deduplication, since that buys us\ntime before the page has to split, making it much more likely that the\nsplit will be avoided entirely. It's very nonlinear.\n\nAs I said, the competition between older snapshots and garbage\ncollection can still lead to version-driven page splits (especially\nwhen non-hot updates are concentrated in one part of the key space, or\none leaf page). But that's arguably a good thing -- it naturally\nrelieves contention. There are actually designs that artificially\nsplit B-Tree pages early [1], detecting concurrency control related\ncontention. Other systems need concurrency control in indexes, which\nwe avoid by having versions live in indexes.\n\n[1] http://cidrdb.org/cidr2021/papers/cidr2021_paper21.pdf\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 4 Feb 2022 16:25:37 -0500", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Fri, Feb 4, 2022 at 11:45 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Jan 26, 2022 at 8:58 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > TODO:\n> > - This is just a POC patch to discuss the design idea and needs a lot\n> > of improvement and testing.\n> > - We are using a slightly different format for storing the dead tids\n> > into the conveyor belt which is explained in the patch but the\n> > traditional multi-pass vacuum is still using the same format (array of\n> > ItemPointeData), so we need to unify that format.\n> > - Performance testing.\n> > - Cleaner interfaces so that we can easily be integrated with auto\n> > vacuum, currently, this is not provided for the manual vacuum.\n> > - Add test cases.\n>\n> I think this is a pretty interesting piece of 
work. I appreciate the\n> effort you've obviously put into the comments, although I do think\n> some of them are going to need some additional clarification. But I\n> think the bigger questions here at the moment are things like (1) Is\n> this the right idea? and if not (2) How could we change it to make it\n> better? and (3) Is there any way that we can make it simpler? It was\n> the last of these questions that prompted me to post\n> http://postgr.es/m/CA+TgmoY18RzQqDm2jE2WDkiA8ngTEDHp7uLtHb3a-ABs+wbY_g@mail.gmail.com\n> because, if that thought were to work out, then we could have more\n> things in common between the conveyor-belt and non-conveyor-belt\n> cases, and we might be able to start with some preliminary work to\n> jigger more things in to the second phase, and then look to integrate\n> the conveyor belt stuff separately.\n\nI agree that if we can do something like that then integrating the\nconveyor belt will be much cleaner.\n\n> My second thought was that perhaps we can create a test scenario\n> where, in one index, the deduplication and bottom-up index deletion\n> and kill_prior_tuple mechanisms are very effective, and in another\n> index, it's not effective at all. For example, maybe index A is an\n> index on the primary key, and index B is a non-unique index on some\n> column that we're updating with ever-increasing values (so that we\n> never put new tuples into a page that could be productively cleaned\n> up). I think what should happen in this case is that A should not grow\n> in size even if it's never vacuumed, while B will require vacuuming to\n> keep the size down. If this example isn't exactly right, maybe we can\n> construct one where that does happen. Then we could try to demonstrate\n> that with this patch we can do less vacuuming work and still keep up\n> than what would be possible without the patch. We'll either be able to\n> show that this is true, or we will see that it's false, or we won't be\n> able to really see much difference. 
Any of those would be interesting\n> findings.\n\n+1\n\n> One thing we could try doing in order to make that easier would be:\n> tweak things so that when autovacuum vacuums the table, it only\n> vacuums the indexes if they meet some threshold for bloat. I'm not\n> sure exactly what happens with the heap vacuuming then - do we do\n> phases 1 and 2 always, or a combined heap pass, or what? But if we\n> pick some criteria that vacuums indexes sometimes and not other times,\n> we can probably start doing some meaningful measurement of whether\n> this patch is making bloat better or worse, and whether it's using\n> fewer or more resources to do it.\n\nI think we can always trigger phase 1 and 2 and phase 2 will only\nvacuum conditionally based on if all the indexes are vacuumed for some\nconveyor belt pages so we don't have risk of scanning without marking\nanything unused. And we can try to measure with other approaches as\nwell where we completely avoid phase 2 and it will be done only along\nwith phase 1 whenever applicable.\n\n> Do you have a git branch for this work?\n\nYeah, my repository: https://github.com/dilipbalaut11/conveyor_test\nbranch: DecouplingIndexAndHeapVacuumUsingCB\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 7 Feb 2022 12:54:56 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Sun, Feb 6, 2022 at 11:25 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > One thing we could try doing in order to make that easier would be:\n> > tweak things so that when autovacuum vacuums the table, it only\n> > vacuums the indexes if they meet some threshold for bloat. I'm not\n> > sure exactly what happens with the heap vacuuming then - do we do\n> > phases 1 and 2 always, or a combined heap pass, or what? 
But if we\n> > pick some criteria that vacuums indexes sometimes and not other times,\n> > we can probably start doing some meaningful measurement of whether\n> > this patch is making bloat better or worse, and whether it's using\n> > fewer or more resources to do it.\n>\n> I think we can always trigger phase 1 and 2 and phase 2 will only\n> vacuum conditionally based on if all the indexes are vacuumed for some\n> conveyor belt pages so we don't have risk of scanning without marking\n> anything unused.\n\nNot sure what you mean about a risk of scanning without marking any\nLP_DEAD items as LP_UNUSED. If VACUUM always does some amount of this,\nthen it follows that the new mechanism added by the patch just can't\nsafely avoid any work at all, making it all pointless. We have to\nexpect heap vacuuming to take place much less often with the patch.\nSimply because that's what the invariant described in comments above\nlazy_scan_heap() requires.\n\nNote that this is not the same thing as saying that we do less\n*absolute* heap vacuuming with the conveyor belt -- my statement about\nless heap vacuuming taking place is *only* true relative to the amount\nof other work that happens in any individual \"shortened\" VACUUM\noperation. We could do exactly the same total amount of heap vacuuming\nas before (in a version of Postgres without the conveyor belt but with\nthe same settings), but much *more* index vacuuming (at least for one\nor two problematic indexes).\n\n> And we can try to measure with other approaches as\n> well where we completely avoid phase 2 and it will be done only along\n> with phase 1 whenever applicable.\n\nI believe that the main benefit of the dead TID conveyor belt (outside\nof global index use cases) will be to enable us to do more (much more)\nindex vacuuming for one index in particular. So it's not really about\ndoing less index vacuuming or less heap vacuuming -- it's about doing\na *greater* amount of *useful* index vacuuming, in less time. 
There is\noften some way in which failing to vacuum one index for a long time\ndoes lasting damage to the index structure.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 8 Feb 2022 09:12:19 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Tue, Feb 8, 2022 at 12:12 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I believe that the main benefit of the dead TID conveyor belt (outside\n> of global index use cases) will be to enable us to do more (much more)\n> index vacuuming for one index in particular. So it's not really about\n> doing less index vacuuming or less heap vacuuming -- it's about doing\n> a *greater* amount of *useful* index vacuuming, in less time. There is\n> often some way in which failing to vacuum one index for a long time\n> does lasting damage to the index structure.\n\nThis makes sense to me, and I think it's a good insight.\n\nIt's not clear to me that we have enough information to make good\ndecisions about which indexes to vacuum and which indexes to skip.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 8 Feb 2022 12:32:57 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Tue, Feb 8, 2022 at 9:33 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Tue, Feb 8, 2022 at 12:12 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > I believe that the main benefit of the dead TID conveyor belt (outside\n> > of global index use cases) will be to enable us to do more (much more)\n> > index vacuuming for one index in particular. So it's not really about\n> > doing less index vacuuming or less heap vacuuming -- it's about doing\n> > a *greater* amount of *useful* index vacuuming, in less time. 
There is\n> > often some way in which failing to vacuum one index for a long time\n> > does lasting damage to the index structure.\n>\n> This makes sense to me, and I think it's a good insight.\n>\n> It's not clear to me that we have enough information to make good\n> decisions about which indexes to vacuum and which indexes to skip.\n\nWhat if \"extra vacuuming, not skipping vacuuming\" was not just an\nabstract goal, but an actual first-class part of the implementation,\nand the index AM API? Then the question we're asking the index/index\nAM is no longer \"Do you [an index] *not* require index vacuuming, even\nthough you are entitled to it according to the conventional rules of\nautovacuum scheduling?\". The question is instead more like \"Could you\nuse an extra, early VACUUM?\".\n\nif we invert the question like this then we have something that makes\nmore sense at the index AM level, but requires significant\nimprovements at the level of autovacuum scheduling. On the other hand\nI think that you already need to do at least some work in that area.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 8 Feb 2022 09:50:25 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Tue, Feb 8, 2022 at 12:50 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > It's not clear to me that we have enough information to make good\n> > decisions about which indexes to vacuum and which indexes to skip.\n>\n> What if \"extra vacuuming, not skipping vacuuming\" was not just an\n> abstract goal, but an actual first-class part of the implementation,\n> and the index AM API? Then the question we're asking the index/index\n> AM is no longer \"Do you [an index] *not* require index vacuuming, even\n> though you are entitled to it according to the conventional rules of\n> autovacuum scheduling?\". 
The question is instead more like \"Could you\n> use an extra, early VACUUM?\".\n>\n> if we invert the question like this then we have something that makes\n> more sense at the index AM level, but requires significant\n> improvements at the level of autovacuum scheduling. On the other hand\n> I think that you already need to do at least some work in that area.\n\nRight, that's why I asked the question. If we're going to ask the\nindex AM whether it would like to be vacuumed right now, we're going\nto have to put some logic into the index AM that knows how to answer\nthat question. But if we don't have any useful statistics that would\nlet us answer the question correctly, then we have problems.\n\nWhile I basically agree with everything that you just wrote, I'm\nsomewhat inclined to think that the question is not best phrased as\neither extra-vacuum or skip-a-vacuum. Either of those supposes a\nnormative amount of vacuuming from which we could deviate in one\ndirection or the other. I think it would be better to phrase it in a\nway that doesn't make such a supposition. Maybe something like: \"Hi,\nwe are vacuuming the heap right now and we are also going to vacuum\nany indexes that would like it, and does that include you?\"\n\nThe point is that it's a continuum. If we decide that we're asking the\nindex \"do you want extra vacuuming?\" then that phrasing suggests that\nyou should only say yes if you really need it. If we decide we're\nasking the index \"can we skip vacuuming you this time?\" then the\nphrasing suggests that you should not feel bad about insisting on a\nvacuum right now, and only surrender your claim if you're sure you\ndon't need it. But in reality, no bias either way is warranted. 
It is\neither better that this index should be vacuumed right now, or better\nthat it should not be vacuumed right now, and whichever is better\nshould be what we choose to do.\n\nTo expand on that just a bit, if I'm a btree index and someone asks me\n\"can we skip vacuuming you this time?\" I might say \"return dead_tups <\ntiny_amount\" and if they ask me \"do you want extra vacuuming\" I might\nsay \"return dead_tups > quite_large_amount\". But if they ask me\n\"should we vacuum you now?\" then I might say \"return dead_tups >\nmoderate_amount\" which feels like the correct thing here.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 8 Feb 2022 13:58:11 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Tue, Feb 8, 2022 at 10:58 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Right, that's why I asked the question. If we're going to ask the\n> index AM whether it would like to be vacuumed right now, we're going\n> to have to put some logic into the index AM that knows how to answer\n> that question. But if we don't have any useful statistics that would\n> let us answer the question correctly, then we have problems.\n\nI have very little faith in the use of statistical sampling for\nanything involving vacuuming. In fact, I think that the current way in\nwhich ANALYZE counts dead tuples is a misapplication of statistics. It\nisn't even wrong. One of the things that I really like about this\nproject is that it can plausibly solve that problem by splitting up\nthe work of VACUUM, at low cost -- it's less top-down. Not only do you\nget the obvious benefits with preventing bloat; you also get\n*continual* feedback about the actual physical reality in the table\n(and indexes, to a lesser extent). As I said recently, right now the\nmore bloat we have, the more uncertainty about the total amount of\nbloat exists. 
We need to control both the bloat, and the uncertainty\nabout the bloat.\n\nThe basic high level idea behind how the optimizer uses statistics\ninvolves the assumption that *all* the rows in the table are\n*themselves* a sample taken from some larger distribution -- something\nfrom the real physical world (meeting this assumption is one reason\nwhy database/schema normalization really matters). And so on a good\nweek it probably won't matter too much to the optimizer if ANALYZE\ndoesn't run until the table size doubles (for a table that was already\nquite large). These are pretty delicate assumptions, that (from the\npoint of view of the optimizer) work out surprisingly well in\npractice.\n\nBloat just isn't like that. Dead tuples are fundamentally cyclic and\ndynamic in nature -- conventional statistics just won't work with\nsomething like that. Worst of all, the process that counts dead tuples\n(ANALYZE) is really an active participant in the system -- the whole\nentire purpose of even looking is to *reduce* the number of dead\ntuples by making an autovacuum run. That's deeply weird.\n\n> The point is that it's a continuum. If we decide that we're asking the\n> index \"do you want extra vacuuming?\" then that phrasing suggests that\n> you should only say yes if you really need it. If we decide we're\n> asking the index \"can we skip vacuuming you this time?\" then the\n> phrasing suggests that you should not feel bad about insisting on a\n> vacuum right now, and only surrender your claim if you're sure you\n> don't need it. But in reality, no bias either way is warranted.\n\nActually, I think that this particular bias *is* warranted. We should\nopenly and plainly be biased in the direction of causing the least\nharm. What's wrong with that? Having accurate information in not an\nintrinsic good. I even think that having more information can be\nstrictly worse, because you might actually believe it. 
Variance\nmatters a lot -- the bias/variance tradeoff is pretty fundamental\nhere.\n\nI'm also saying some of this stuff because of broader VACUUM design\nconsiderations. VACUUM fundamentally has to work at the table level,\nand I don't see that changing. The approach of making autovacuum do\nsomething akin to a plain VACUUM command in the simplest cases, and\nonly later some extra \"dynamic mini vacuums\" (that pick up where the\nVACUUM command style VACUUM left off) has a lot to recommend it. This\napproach allows most of the current autovacuum settings to continue to\nwork in roughly the same way. They just need to have their\ndocumentation updated to make it clear that they're about the worst\ncase.\n\n> To expand on that just a bit, if I'm a btree index and someone asks me\n> \"can we skip vacuuming you this time?\" I might say \"return dead_tups <\n> tiny_amount\" and if they ask me \"do you want extra vacuuming\" I might\n> say \"return dead_tups > quite_large_amount\". But if they ask me\n> \"should we vacuum you now?\" then I might say \"return dead_tups >\n> moderate_amount\" which feels like the correct thing here.\n\nThe btree side of this shouldn't care at all about dead tuples (in\ngeneral we focus way too much on dead tuples, and way too little on\npages). With bottom-up index deletion the number of dead tuples in the\nindex is just about completely irrelevant. It's entirely possible and\noften even likely that 20%+ of all index tuples will be dead at any\none time, when the optimization perfectly preserves the index\nstructure.\n\nThe btree side of the index AM API should be focussing on the growth\nin index size, relative to some expectation (like maybe the growth for\nwhatever index on the same table has grown the least since last time,\naccounting for obvious special cases like partial indexes). Perhaps\nwe'd give some consideration to bulk deletes, too. 
Overall, it should\nbe pretty simple, and should sometimes force us to do one of these\n\"dynamic mini vacuums\" of the index just because we're not quite sure\nwhat to do. There is nothing wrong with admitting the uncertainty.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 8 Feb 2022 11:51:18 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Tue, Feb 8, 2022 at 10:42 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Sun, Feb 6, 2022 at 11:25 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > One thing we could try doing in order to make that easier would be:\n> > > tweak things so that when autovacuum vacuums the table, it only\n> > > vacuums the indexes if they meet some threshold for bloat. I'm not\n> > > sure exactly what happens with the heap vacuuming then - do we do\n> > > phases 1 and 2 always, or a combined heap pass, or what? But if we\n> > > pick some criteria that vacuums indexes sometimes and not other times,\n> > > we can probably start doing some meaningful measurement of whether\n> > > this patch is making bloat better or worse, and whether it's using\n> > > fewer or more resources to do it.\n> >\n> > I think we can always trigger phase 1 and 2 and phase 2 will only\n> > vacuum conditionally based on if all the indexes are vacuumed for some\n> > conveyor belt pages so we don't have risk of scanning without marking\n> > anything unused.\n>\n> Not sure what you mean about a risk of scanning without marking any\n> LP_DEAD items as LP_UNUSED.\n\nI mean for testing purposes if we integrate with autovacuum such that,\n1) always do the first pass of the vacuum 2) index vacuum will be done\nonly for the indexes which have bloated more than some threshold and\nthen 3) we can always trigger the heap vacuum second pass. 
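A minimal Python sketch of that testing arrangement (the names are invented here, not taken from the patch): the second heap pass can be triggered unconditionally, because it only reclaims dead items up to the oldest conveyor-belt page that *every* index has already been vacuumed past.

```python
# Hedged sketch of the gating described above, with invented names --
# this is not actual PostgreSQL code. The second heap pass is safe to
# run at any time because it never touches LP_DEAD items beyond the
# point that all indexes have been vacuumed to.

def second_pass_limit(index_vacuum_progress):
    """index_vacuum_progress: index name -> conveyor-belt page up to
    which that index has been vacuumed (0 = no progress yet)."""
    if not index_vacuum_progress:
        return 0
    return min(index_vacuum_progress.values())

def heap_second_pass(dead_items_by_page, index_vacuum_progress):
    """Return the conveyor-belt pages whose LP_DEAD items may safely be
    marked LP_UNUSED; later pages wait for more index vacuuming."""
    limit = second_pass_limit(index_vacuum_progress)
    return [page for page in sorted(dead_items_by_page) if page <= limit]
```

If any index has made no progress at all, the limit is zero and the pass is a harmless no-op -- which is the property that makes it safe to trigger on every cycle while testing.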
So my\npoint was that even if autovacuum triggers the second vacuum pass\nevery time, it will not do anything unless all the indexes have been\nvacuumed.\n\n> If VACUUM always does some amount of this,\n> then it follows that the new mechanism added by the patch just can't\n> safely avoid any work at all, making it all pointless. We have to\n> expect heap vacuuming to take place much less often with the patch.\n> Simply because that's what the invariant described in comments above\n> lazy_scan_heap() requires.\n\nIn the second pass we are making sure that we don't mark any LP_DEAD\nto LP_UNUSED for which index vacuum is not done. Basically we are\nstoring dead items in the conveyor belt, and whenever we do the index\npass we remember up to which conveyor belt page index vacuum is done.\nAnd before starting the heap second pass we will find the minimum\nconveyor belt page up to which all the indexes have been vacuumed.\n\n> Note that this is not the same thing as saying that we do less\n> *absolute* heap vacuuming with the conveyor belt -- my statement about\n> less heap vacuuming taking place is *only* true relative to the amount\n> of other work that happens in any individual \"shortened\" VACUUM\n> operation. We could do exactly the same total amount of heap vacuuming\n> as before (in a version of Postgres without the conveyor belt but with\n> the same settings), but much *more* index vacuuming (at least for one\n> or two problematic indexes).\n\n> > And we can try to measure with other approaches as\n> > well where we completely avoid phase 2 and it will be done only along\n> > with phase 1 whenever applicable.\n>\n> I believe that the main benefit of the dead TID conveyor belt (outside\n> of global index use cases) will be to enable us to do more (much more)\n> index vacuuming for one index in particular. So it's not really about\n> doing less index vacuuming or less heap vacuuming -- it's about doing\n> a *greater* amount of *useful* index vacuuming, in less time.
There is\n> often some way in which failing to vacuum one index for a long time\n> does lasting damage to the index structure.\n\nI agree with the point.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 9 Feb 2022 11:22:20 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Wed, Feb 9, 2022 at 1:21 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n\n> The btree side of this shouldn't care at all about dead tuples (in\n> general we focus way too much on dead tuples, and way too little on\n> pages). With bottom-up index deletion the number of dead tuples in the\n> index is just about completely irrelevant. It's entirely possible and\n> often even likely that 20%+ of all index tuples will be dead at any\n> one time, when the optimization perfectly preserves the index\n> structure.\n>\n> The btree side of the index AM API should be focussing on the growth\n> in index size, relative to some expectation (like maybe the growth for\n> whatever index on the same table has grown the least since last time,\n> accounting for obvious special cases like partial indexes). Perhaps\n> we'd give some consideration to bulk deletes, too. Overall, it should\n> be pretty simple, and should sometimes force us to do one of these\n> \"dynamic mini vacuums\" of the index just because we're not quite sure\n> what to do. There is nothing wrong with admitting the uncertainty.\n\nI agree with the point that we should be focusing more on index size\ngrowth compared to dead tuples. But I don't think that we can\ncompletely ignore the number of dead tuples. Although we have the\nbottom-up index deletion but whether the index structure will be\npreserved or not will depend upon what keys we are inserting next. So\nfor example if there are 80% dead tuples but so far index size is fine\nthen can we avoid vacuum? 
If we avoid vacuuming then it is very much\npossible that in some cases we will create a huge bloat e.g. if we are\ninserting some keys which can not take advantage of bottom up\ndeletion. So IMHO the decision should be a combination of index size\nbloat and % dead tuples. Maybe we can add more weight to the size\nbloat and less weight to % dead tuple but we should not completely\nignore it.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 9 Feb 2022 11:48:00 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Wed, Feb 9, 2022 at 1:18 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> I agree with the point that we should be focusing more on index size\n> growth compared to dead tuples. But I don't think that we can\n> completely ignore the number of dead tuples. Although we have the\n> bottom-up index deletion but whether the index structure will be\n> preserved or not will depend upon what keys we are inserting next. So\n> for example if there are 80% dead tuples but so far index size is fine\n> then can we avoid vacuum? If we avoid vacuuming then it is very much\n> possible that in some cases we will create a huge bloat e.g. if we are\n> inserting some keys which can not take advantage of bottom up\n> deletion. So IMHO the decision should be a combination of index size\n> bloat and % dead tuples. Maybe we can add more weight to the size\n> bloat and less weight to % dead tuple but we should not completely\n> ignore it.\n\nI think that dead index tuples really don't matter if they're going to\nget removed anyway before a page split happens. In particular, if\nwe're going to do a bottom-up index deletion pass before splitting the\npage, then who cares if there are going to be dead tuples around until\nthen? 
You might think that they'd have the unfortunate effect of\nslowing down scans, and they could slow down ONE scan, but if they do,\nthen I think kill_prior_tuple will hint them dead and they won't\nmatter any more. Now, if we have a page that is going to split,\nbecause it's going to receive inserts but neither kill_prior_tuple nor\nbottom-up index deletion are going to keep us out of trouble, then the\ndead tuples matter. And if we have a page where all the tuples are\ndead and no further inserts are ever going to happen, those dead\ntuples also matter, because getting rid of them would let us recycle\nthe page.\n\nJust to be clear, when I say that the dead index tuples don't matter\nhere, I mean from the point of view of the index. From the point of\nview of the table, the presence of dead index tuples (or even the\npotential presence of dead tuples) pointing to dead line pointers is\nan issue that can drive heap bloat. But from the point of view of the\nindex, because we don't ever merge sibling index pages, and because we\nhave kill_prior_tuple, there's not much value in freeing up space in\nindex pages unless it either prevents a split or lets us free the\nwhole page. So I agree with Peter that index growth is what really\nmatters.\n\nHowever, I have a concern that Peter's idea to use the previous index\ngrowth to drive future index vacuuming distinction is retrospective\nrather than prospective. If the index is growing more than it should\nbased on the data volume, then evidently we didn't do enough vacuuming\nat some point in the past. It's reasonable to step up our efforts in\nthe present to make sure that the problem doesn't continue, but in\nsome sense it's already too late. What we would really like is a\nmeasure that answers the question: is the index going to bloat in the\nrelatively near future if we don't vacuum it now? 
I think that the\ndead tuple count is trying, however imperfectly, to figure that out.\nAll other things being equal, the more dead tuples there are in the\nindex, the more bloat we're going to have later if we don't clean them\nout now.\n\nThe problem is not with that core idea, which IMHO is actually good,\nbut that all other things are NOT equal. Peter has shown pretty\nconvincingly that in some workloads, essentially 100% of dead tuples\nare going to get removed without causing a page split and the index\ngrowth will be 0, whereas in other workloads 0% of dead tuples are\ngoing to get removed without causing index growth. If you knew that\nyou had the second case, then counting dead index tuples to decide\nwhen to vacuum would, in my opinion, be a very sensible thing to do.\nIt would still not be perfect, because dead tuples in pages that are\ngoing to get split are a lot worse than dead tuples in pages that\naren't going to be split, but it doesn't seem meaningless. However, if\nall of the index tuples are going to get removed in a timely fashion\nanyway, then it's as useful as a stopped clock: it will be right\nwhenever it says the index doesn't need to be vacuumed, and wrong when\nit says anything else.\n\nIn a certain sense, bottom-up index deletion may have exacerbated the\nproblems in this area. The more ways we add to remove dead tuples from\nindexes without vacuum, the less useful dead tuples will become as a\npredictor of index growth. Maybe #-of-dead-tuples and\nfuture-index-growth weren't that tightly coupled even before bottom-up\nindex deletion, but it must be worse now.\n\nI'm not hung up on using the # of dead tuples specifically as the\nmetric for index vacuuming, and it may be best to pick some other\nmeasure. But I am a little suspicious that if the only measure is past\nindex growth, we will let some situations go too far before we wake up\nand do something about them. 
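One hedged way to express that "subset of dead tuples" idea in code (an invented heuristic, offered only as a sketch -- nothing like this exists in PostgreSQL today) is to discount the raw dead-tuple count by the rate at which past dead tuples were observed to disappear without vacuum:

```python
# Purely illustrative heuristic: discount the raw dead-tuple count by
# the fraction of past dead tuples observed to disappear without VACUUM
# (kill_prior_tuple, bottom-up deletion), so a "stopped clock" index
# whose dead tuples always vanish on their own scores near zero.

def effective_dead_tuples(dead_tuples, removed_opportunistically,
                          seen_dead_total):
    """Estimate how many current dead tuples are likely to survive long
    enough to drive a page split."""
    if seen_dead_total == 0:
        return dead_tuples  # no history yet: assume none get cleaned up
    survival_rate = 1.0 - removed_opportunistically / seen_dead_total
    return dead_tuples * survival_rate
```

An index where ~100% of past dead tuples were removed opportunistically would then contribute almost nothing to the vacuum-now signal, while one where 0% were removed would count its dead tuples in full.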
My intuition is that it would be a good\nidea to come up with something we could measure, even if it's\nimperfect, that would give us some clue that trouble is brewing before\npages actually start splitting. Now maybe my intuition is wrong and\nthere is nothing better, but I think it's worth a thought.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 9 Feb 2022 09:13:32 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Wed, Feb 9, 2022 at 6:13 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Just to be clear, when I say that the dead index tuples don't matter\n> here, I mean from the point of view of the index. From the point of\n> view of the table, the presence of dead index tuples (or even the\n> potential presence of dead tuples) pointing to dead line pointers is\n> an issue that can drive heap bloat. But from the point of view of the\n> index, because we don't ever merge sibling index pages, and because we\n> have kill_prior_tuple, there's not much value in freeing up space in\n> index pages unless it either prevents a split or lets us free the\n> whole page. So I agree with Peter that index growth is what really\n> matters.\n\nOne small caveat that I'd add is this: heap fragmentation likely makes\nit harder to avoid page splits in indexes, to a degree. It is arguably\none cause of the page splits that do happen in a table like\npgbench_tellers, with standard pgbench (and lots of throughput, lots\nof clients). The tellers (and also branches) primary key tends to\ndouble in size in my recent tests (still way better than a 20x\nincrease in size, which is what happened in Postgres 11 and maybe even\n13). I think that it might be possible to perfectly preserve the\noriginal index size (even with ANALYZE running now and again) by\nsetting heap fill factor very low, maybe 50 or less.\n\nThis is a minor quibble, though. 
It still makes sense to think of heap\nfragmentation as a problem for the heap itself, and not for indexes at\nall, since the effect I describe is relatively insignificant, and just\nabout impossible to model. The problem really is that the heap pages\nare failing to hold their original logical rows in place -- the index\nsize issue is more of a symptom than a problem unto itself.\n\n> However, I have a concern that Peter's idea to use the previous index\n> growth to drive future index vacuuming distinction is retrospective\n> rather than prospective. If the index is growing more than it should\n> based on the data volume, then evidently we didn't do enough vacuuming\n> at some point in the past.\n\nThat's a very valid concern. As you know, the great advantage about\nretrospectively considering what's not going well (and reducing\neverything to some highly informative measure like growth in index\nsize) is that it's reliable -- you don't have to understand precisely\nhow things got that way, which is just too complicated to get right.\nAnd as you point out, the great disadvantage is that it has already\nhappened -- which might already be too late.\n\nMore on that later...\n\n> It's reasonable to step up our efforts in\n> the present to make sure that the problem doesn't continue, but in\n> some sense it's already too late. What we would really like is a\n> measure that answers the question: is the index going to bloat in the\n> relatively near future if we don't vacuum it now? I think that the\n> dead tuple count is trying, however imperfectly, to figure that out.\n> All other things being equal, the more dead tuples there are in the\n> index, the more bloat we're going to have later if we don't clean them\n> out now.\n\nOne of the key intuitions behind bottom-up index deletion is to treat\nthe state of an index page as a dynamic thing, not a static thing. 
The\ninformation that we take from the page that drives our decisions is\nvery reliable on average, over time, in the aggregate. At the same\ntime, the information is very noisy, and could be wrong in important\nways at just about any time. The fundamental idea was to take\nadvantage of the first property, without ever getting killed by the\nsecond property.\n\nTo me this seems conceptually similar to how one manages risk when\nplaying a game of chance while applying the Kelly criterion. The\ncriterion provides probabilistic certainty on the best strategy to use\nin a situation where we have known favorable odds, an infinite series\nof bets, and a personal bankroll. I recently came across a great blog\npost about it, which gets the idea across well:\n\nhttps://explore.paulbutler.org/bet/\n\nIf you scroll down to the bottom of the page, there are some general\nconclusions, some of which are pretty counterintuitive. Especially\n\"Maximizing expected value can lead to a bad long-run strategy\".\nGrowth over time is much more important than anything else, since you\ncan play as many individual games as you like -- provided you never go\nbankrupt. I think that *a lot* of problems can be usefully analyzed\nthat way, or at least benefit from a general awareness of certain\nparadoxical aspects of probability. Bankruptcy must be recognized as\nqualitatively different to any non-zero bankroll. A little bit like\nhow splitting a leaf page unnecessarily is truly special.\n\nOnce we simply avoid ruin, we can get the benefit of playing a game\nwith favorable odds -- growth over time is what matters, not\nnecessarily our current bankroll. It sounds like a fairly small\ndifference, but that's deceptive -- it's a huge difference.\n\n> The problem is not with that core idea, which IMHO is actually good,\n> but that all other things are NOT equal.\n\n...now to get back to talking about VACUUM itself, and to this project.\n\nI couldn't agree more -- all other things are NOT equal. 
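To make the Kelly analogy above concrete (a toy sketch with invented numbers, not anything VACUUM-specific):

```python
# Toy illustration of the Kelly point. With even-odds bets won with
# probability p, the Kelly fraction is f* = 2p - 1. Staking the entire
# bankroll maximizes the expected value of each individual bet, yet a
# single loss is ruin; a fractional stake grows steadily instead.

def kelly_fraction(p, b=1.0):
    """Kelly bet fraction for win probability p at net odds b."""
    return p - (1.0 - p) / b

def play(bankroll, fraction, outcomes):
    """Apply a fixed-fraction staking strategy to a win/loss sequence."""
    for won in outcomes:
        stake = bankroll * fraction
        bankroll += stake if won else -stake
    return bankroll

# Two wins out of every three, repeated -- deterministic for clarity.
outcomes = [True, True, False] * 10
cautious = play(100.0, kelly_fraction(2 / 3), outcomes)  # stake ~1/3
reckless = play(100.0, 1.0, outcomes)                    # stake everything
```

Here `cautious` ends well above the starting bankroll, while `reckless` hits zero on the very first loss despite having the higher per-bet expected value -- growth over time is what matters, provided you never go bankrupt.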
We need a way\nto manage the risk that things could change quickly when an index that\nwe believe has many dead tuples hasn't grown at all just yet. It's\nprobably also true that we should try to predict the near future, and\nnot 100% rely on the fact that what we've been doing seems to have\nworked so far -- I do accept that.\n\nWe should probably dispense with the idea that we'll be making these\ndecisions about what to do with an index like this (bloated in a way\nthat bottom-up index deletion just won't help with) in an environment\nthat is similar to how the current \"skip index scan when # heap pages\nwith one or more LP_DEAD items < 2% of rel_pages\" thing. That\nmechanism has to be very conservative because we just don't know when\nthe next opportunity to vacuum indexes will be -- we almost have to\nassume that the decision will be static, and made exactly once, so we\nbetter be defensive. But why should that continue to be true with the\nconveyor belt stuff in place, and with periodic mini-vacuums that\ncoordinate over time? I don't think it has to be like that. We can\nmake it much more dynamic.\n\nI can imagine a two-way dialog between the index and between\nvacuumlazy.c that takes place over time. The index AM might be able to\nreport something along the lines of:\n\n\"While I think that this index has more dead index tuples then it\nreally should, the fact is that it hasn't grown at all, even by one\nsingle leaf page. And so don't you [vacuumlazy.c] should not make me\nvacuum the index right now. But be careful -- check back in again in\nanother minute or two, because the situation must be assumed to be\npretty volatile for now.\"\n\nThis is just an example, not a concrete design. My point is that\napproaching the problem dynamically makes it *vastly* easier to do the\nright thing. It's far easier to manage the consequences of being wrong\nthan it is to be right all the time. 
We're going to be wrong anyway,\nso better to be wrong on our own terms.\n\n> I'm not hung up on using the # of dead tuples specifically as the\n> metric for index vacuuming, and it may be best to pick some other\n> measure. But I am a little suspicious that if the only measure is past\n> index growth, we will let some situations go too far before we wake up\n> and do something about them. My intuition is that it would be a good\n> idea to come up with something we could measure, even if it's\n> imperfect, that would give us some clue that trouble is brewing before\n> pages actually start splitting. Now maybe my intuition is wrong and\n> there is nothing better, but I think it's worth a thought.\n\nWe will need something like that. I think that LP_DEAD items (or\nwould-be LP_DEAD items -- tuples with storage that would get pruned\ninto LP_DEAD items if we were to prune) in the table are much more\ninteresting than dead heap-only tuples, and also more interesting than\ndead index tuples. Especially the distribution of such LP_DEAD items\nin the table, and their concentration. That does seem much more likely\nto be robust as a quantitative driver of index vacuuming.\n\nIn the extreme case when there are a huge amount of LP_DEAD items in\nthe table, then we're going to want to make them LP_UNUSED anyway,\nwhich implies that we'll do index vacuuming to make it safe. Since\nthat's already going to be true, maybe we should try to find a way to\nusefully scale the behavior, so that maybe some indexes are vacuumed\nsooner when the number of LP_DEAD items is increasing.
Not really\nsure, but that seems more promising than anything else.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 9 Feb 2022 11:27:26 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Wed, Feb 9, 2022 at 2:27 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> We should probably dispense with the idea that we'll be making these\n> decisions about what to do with an index like this (bloated in a way\n> that bottom-up index deletion just won't help with) in an environment\n> that is similar to how the current \"skip index scan when # heap pages\n> with one or more LP_DEAD items < 2% of rel_pages\" thing. That\n> mechanism has to be very conservative because we just don't know when\n> the next opportunity to vacuum indexes will be -- we almost have to\n> assume that the decision will be static, and made exactly once, so we\n> better be defensive. But why should that continue to be true with the\n> conveyor belt stuff in place, and with periodic mini-vacuums that\n> coordinate over time? I don't think it has to be like that. We can\n> make it much more dynamic.\n\nI'm not sure that we can. I mean, there's still only going to be ~3\nautovacuum workers, and there could be arbitrarily many tables. Even\nif the vacuum load is within the bounds of what the system can\nsustain, individual tables can't be assured of being visited\nfrequently (or so it seems to me) and it could be that there are\nactually not enough resources to vacuum and have to try to cope as\nbest we can. Less unnecessary vacuuming of large indexes can help, of\ncourse, but I'm not sure it fundamentally changes the calculus.\n\n> We will need something like that. 
I think that LP_DEAD items (or\n> would-be LP_DEAD items -- tuples with storage that would get pruned\n> into LP_DEAD items if we were to prune) in the table are much more\n> interesting than dead heap-only tuples, and also more interesting that\n> dead index tuples. Especially the distribution of such LP_DEAD items\n> in the table, and their concentration. That does seem much more likely\n> to be robust as a quantitative driver of index vacuuming.\n\nHmm... why would the answer have to do with dead items in the heap? I\nwas thinking along the lines of trying to figure out either a more\nreliable count of dead tuples in the index, subtracting out whatever\nwe save by kill_prior_tuple and bottom-up vacuuming; or else maybe a\ncount of the subset of dead tuples that are likely not to get\nopportunistically pruned in one way or another, if there's some way to\nguess that. Or maybe something where when we see an index page filling\nup we try to figure out (or guess) that it's close to really needing a\nsplit - i.e. that it's not full of tuples that we could just junk to\nmake space - and notice how often that's happening. I realize I'm\nhand-waving, but if the property is a property of the heap rather than\nthe index, how will different indexes get different treatment?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 9 Feb 2022 16:40:59 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Wed, Feb 9, 2022 at 1:41 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I'm not sure that we can. I mean, there's still only going to be ~3\n> autovacuum workers, and there could be arbitrarily many tables. 
Even\n> if the vacuum load is within the bounds of what the system can\n> sustain, individual tables can't be assured of being visited\n> frequently (or so it seems to me) and it could be that there are\n> actually not enough resources to vacuum and have to try to cope as\n> best we can. Less unnecessary vacuuming of large indexes can help, of\n> course, but I'm not sure it fundamentally changes the calculus.\n\nYou seem to be vastly underestimating the value in being able to\nspread out and reschedule the work, and manage costs more generally.\nIf you can multiplex autovacuum workers across tables, by splitting up\nwork across a table's index over time, then it might not matter at all\nthat you only have 3 workers. If you can spread out the work over\ntime, then you make things much cheaper (fewer FPIs by aligning to\ncheckpoint boundaries). And, because you have a schedule that can be\ndynamically updated, you get to update your global view of the world\n(not just one table) before you've fully committed to it -- if you\nprovisionally say that you think that a certain index won't need to be\nvacuumed for a long time, that isn't the last word anymore.\n\nCosts are paid by the whole system, but benefits only go to individual\ntables and indexes. Being able to manage costs over time with a sense\nof the benefits, and a sense of high level priorities will be *huge*\nfor us. Managing debt at the level of the entire system (not just one\ntable or index) is also really important. (Though maybe we should just\nfocus on the v1, just because that's what is needed right now.)\n\n> > We will need something like that. I think that LP_DEAD items (or\n> > would-be LP_DEAD items -- tuples with storage that would get pruned\n> > into LP_DEAD items if we were to prune) in the table are much more\n> > interesting than dead heap-only tuples, and also more interesting that\n> > dead index tuples. 
Especially the distribution of such LP_DEAD items\n> > in the table, and their concentration. That does seem much more likely\n> > to be robust as a quantitative driver of index vacuuming.\n>\n> Hmm... why would the answer have to do with dead items in the heap?\n\nWe're eventually going to have to make the LP_DEAD items LP_UNUSED\nanyway here. So we might as well get started on that, with the index\nthat we *also* think is the one that might need it the most, for its\nown reasons. We're making a decision on the basis of multiple factors,\nknowing that in the worst case (when the index really didn't need\nanything at all) we will have at least had the benefit of doing some\nactually-useful work sooner rather than later. We should probably\nconsider multiple reasons to do any unit of work.\n\n> I was thinking along the lines of trying to figure out either a more\n> reliable count of dead tuples in the index, subtracting out whatever\n> we save by kill_prior_tuple and bottom-up vacuuming; or else maybe a\n> count of the subset of dead tuples that are likely not to get\n> opportunistically pruned in one way or another, if there's some way to\n> guess that.\n\nI don't know how to build something like that, since that works by\nunderstanding what's working, not by noticing that some existing\nstrategy plainly isn't working. The only positive information that I have\nconfidence in is the extreme case where you have zero index growth.\nWhich is certainly possible, but perhaps not that interesting with a\nreal workload.\n\nThere are emergent behaviors with bottom-up deletion. Purely useful\nbehaviors, as far as I know, but still very hard to precisely nail\ndown. For example, Victor Yegorov came up with an adversarial\nbenchmark [1] that showed that the technique dealt with index bloat\nfrom queue-like inserts and deletes that recycled the same distinct\nkey values over time, since they happened to be mixed with non-hot\nupdates. 
It dealt very well with that, even though *I had no clue*\nthat it would work *at all*, and might have even incorrectly predicted\nthe opposite if Victor had asked about it in advance.\n\n> I realize I'm\n> hand-waving, but if the property is a property of the heap rather than\n> the index, how will different indexes get different treatment?\n\nMaybe by making the primary key growth an indicator of what is\nreasonable for the other indexes (or other B-Tree indexes) -- it has a\nnatural tendency to be the least bloated possible index. If you have\nsomething like a GiST index, or if you have a B-Tree index that\nconstantly gets non-HOT updates that logically modify an indexed\ncolumn, then it should become reasonably obvious. Maybe there'd be\nsome kind of feedback behavior to lock in \"bloat prone index\" for a\ntime.\n\nIf we can bring costs into it too (e.g., spreading out the burden of\nindex vacuuming over time), then it becomes acceptable to incorrectly\ndetermine which index needed special attention. We will still remember\nthat that one index has been vacuumed up to a certain point, which is\nstill useful -- that work would have to have been completed either\nway, so it's really no real loss. Plus we've spread the burden out\nover time, which is always useful. The cost control stuff could easily\nmore than make up for the fact that we don't have a mythical perfect\nmodel that always knows exactly what to do, when, based on the needs\nof indexes.\n\nI think that expanding the scope to cover cost management actually\nmakes this project easier, not harder. Costs really matter, and are\nmuch easier to understand. 
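To sketch the primary-key-as-baseline idea from earlier in this message (illustrative Python only; the names and the 2x threshold are invented):

```python
# Treat the slowest-growing index on a table (often the primary key,
# which has a natural tendency to be the least bloated) as the baseline
# for growth explained by data volume, and flag indexes whose growth
# exceeds a multiple of that baseline as bloat prone.

def bloat_prone_indexes(pages_before, pages_after, threshold=2.0):
    """Both arguments map index name -> size in pages, measured at two
    points in time; returns the names flagged as bloat prone."""
    growth = {name: pages_after[name] / pages_before[name]
              for name in pages_before}
    baseline = min(growth.values())  # the least-bloated index's growth
    return sorted(name for name, g in growth.items()
                  if g > baseline * threshold)
```

Being wrong here is cheap in the way described above: a falsely flagged index just gets some of its inevitable vacuuming done earlier, spread over time.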
Cost control makes it okay to guess about\nbenefits for the index/queries and be wrong.\n\n[1] https://www.postgresql.org/message-id/CAGnEbogATZS1mWMVX8FzZHMXzuDEcb10AnVwwhCtXtiBpg3XLQ@mail.gmail.com\n--\nPeter Geoghegan\n\n\n", "msg_date": "Wed, 9 Feb 2022 15:18:15 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Wed, Feb 9, 2022 at 7:43 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Feb 9, 2022 at 1:18 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> I think that dead index tuples really don't matter if they're going to\n> get removed anyway before a page split happens. In particular, if\n> we're going to do a bottom-up index deletion pass before splitting the\n> page, then who cares if there are going to be dead tuples around until\n> then? You might think that they'd have the unfortunate effect of\n> slowing down scans, and they could slow down ONE scan, but if they do,\n> then I think kill_prior_tuple will hint them dead and they won't\n> matter any more.\n\nActually I was not worried about the scan getting slow. What I was\nworried about is if we keep ignoring the dead tuples for long time\nthen in the worst case if we have huge number of dead tuples in the\nindex maybe 80% to 90% and then suddenly if we get a lot of insertion\nfor the keys which can not use bottom up deletion (due to the key\nrange). 
So now we have a lot of pages which have only dead tuples but\nwe will still allocate new pages because we ignored the dead tuple %\nand did not vacuum for a long time.\n\nIn short I am worried about creating a sudden bloat in the index due\nto a lot of existing dead tuples.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 10 Feb 2022 13:39:48 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Wed, Feb 9, 2022 at 6:18 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> You seem to be vastly underestimating the value in being able to\n> spread out and reschedule the work, and manage costs more generally.\n\nHmm. I think you're vastly overestimating the extent to which it's\npossible to spread out and reschedule the work. I don't know which of\nus is wrong. From my point of view, if VACUUM is going to do a full\nphase 1 heap pass and a full phase 2 heap pass on either side of\nwhatever index work it does, there is no way that things are going to\nget that much more dynamic than they are today. And even if we didn't\ndo that, in order to make any progress setting LP_DEAD pointers to\nLP_UNUSED, you have to vacuum the entire index, which might be BIG. It\nwould be great to have a lot of granularity here but it doesn't seem\nachievable.\n\n> > I was thinking along the lines of trying to figure out either a more\n> > reliable count of dead tuples in the index, subtracting out whatever\n> > we save by kill_prior_tuple and bottom-up vacuuming; or else maybe a\n> > count of the subset of dead tuples that are likely not to get\n> > opportunistically pruned in one way or another, if there's some way to\n> > guess that.\n>\n> I don't know how to build something like that, since that works by\n> understanding what's working, not by noticing that some existing\n> strategy plainly isn't working. 
The only positive information that I have\n> confidence in is the extreme case where you have zero index growth.\n> Which is certainly possible, but perhaps not that interesting with a\n> real workload.\n>\n> There are emergent behaviors with bottom-up deletion. Purely useful\n> behaviors, as far as I know, but still very hard to precisely nail\n> down. For example, Victor Yegorov came up with an adversarial\n> benchmark [1] that showed that the technique dealt with index bloat\n> from queue-like inserts and deletes that recycled the same distinct\n> key values over time, since they happened to be mixed with non-hot\n> updates. It dealt very well with that, even though *I had no clue*\n> that it would work *at all*, and might have even incorrectly predicted\n> the opposite if Victor had asked about it in advance.\n\nI don't understand what your point is in these two paragraphs. I'm\njust arguing that, if a raw dead tuple count is meaningless because a\nlot of them are going to disappear harmlessly with or without vacuum,\nit's reasonable to try to get around that problem by counting the\nsubset of dead tuples where that isn't true. I agree that it's unclear\nhow to do that, but that doesn't mean that it can't be done.\n\n> > I realize I'm\n> > hand-waving, but if the property is a property of the heap rather than\n> > the index, how will different indexes get different treatment?\n>\n> Maybe by making the primary key growth an indicator of what is\n> reasonable for the other indexes (or other B-Tree indexes) -- it has a\n> natural tendency to be the least bloated possible index. If you have\n> something like a GiST index, or if you have a B-Tree index that\n> constantly gets non-HOT updates that logically modify an indexed\n> column, then it should become reasonably obvious. Maybe there'd be\n> some kind of feedback behavior to lock in \"bloat prone index\" for a\n> time.\n\nI have the same concern about this as what I mentioned before: it's\npurely retrospective. 
Therefore in my mind it's a very reasonable\nchoice for a backstop, but not a somewhat unsatisfying choice for a\nprimary mechanism.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 10 Feb 2022 14:13:56 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Thu, Feb 10, 2022 at 3:10 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> Actually I was not worried about the scan getting slow. What I was\n> worried about is if we keep ignoring the dead tuples for long time\n> then in the worst case if we have huge number of dead tuples in the\n> index maybe 80% to 90% and then suddenly if we get a lot of insertion\n> for the keys which can not use bottom up deletion (due to the key\n> range). So now we have a lot of pages which have only dead tuples but\n> we will still allocate new pages because we ignored the dead tuple %\n> and did not vacuum for a long time.\n\nIt seems like a reasonable concern to me ... and I think it's somewhat\nrelated to my comments about trying to distinguish which dead tuples\nmatter vs. which don't.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 10 Feb 2022 14:16:11 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Thu, Feb 10, 2022 at 11:16 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Thu, Feb 10, 2022 at 3:10 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > Actually I was not worried about the scan getting slow. What I was\n> > worried about is if we keep ignoring the dead tuples for long time\n> > then in the worst case if we have huge number of dead tuples in the\n> > index maybe 80% to 90% and then suddenly if we get a lot of insertion\n> > for the keys which can not use bottom up deletion (due to the key\n> > range). 
So now we have a lot of pages which have only dead tuples but\n> > we will still allocate new pages because we ignored the dead tuple %\n> > and did not vacuum for a long time.\n>\n> It seems like a reasonable concern to me ... and I think it's somewhat\n> related to my comments about trying to distinguish which dead tuples\n> matter vs. which don't.\n\nIt's definitely a reasonable concern. But once you find yourself in\nthis situation, *every* index will need to be vacuumed anyway, pretty\nmuch as soon as possible. There will be many LP_DEAD items in the\nheap, which will be enough to force index vacuuming of all indexes.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Thu, 10 Feb 2022 11:21:30 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" }, { "msg_contents": "On Thu, Feb 10, 2022 at 11:14 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Hmm. I think you're vastly overestimating the extent to which it's\n> possible to spread out and reschedule the work. I don't know which of\n> us is wrong. From my point of view, if VACUUM is going to do a full\n> phase 1 heap pass and a full phase 2 heap pass on either side of\n> whatever index work it does, there is no way that things are going to\n> get that much more dynamic than they are today.\n\nWaiting to vacuum each index allows us to wait until the next VACUUM\noperation on the table, giving us more TIDs to remove when we do go to\nvacuum one of these large indexes. Making decisions dynamically seems\nvery promising because it gives us the most flexibility. In principle\nthe workload might not allow for any of that, but in practice I think\nthat it will.\n\n> I don't understand what your point is in these two paragraphs. 
I'm\n> just arguing that, if a raw dead tuple count is meaningless because a\n> lot of them are going to disappear harmlessly with or without vacuum,\n> it's reasonable to try to get around that problem by counting the\n> subset of dead tuples where that isn't true. I agree that it's unclear\n> how to do that, but that doesn't mean that it can't be done.\n\nVACUUM is a participant in the system -- it sees how physical\nrelations are affected by the workload, but it also sees how physical\nrelations are affected by previous VACUUM operations. If it goes to\nVACUUM an index on the basis of a relatively small difference (that\nmight just be noise), and does so systematically and consistently,\nthat might have unintended consequences. In particular, we might do\nthe wrong thing, again and again, because we're overinterpreting noise\nagain and again.\n\n> I have the same concern about this as what I mentioned before: it's\n> purely retrospective. Therefore in my mind it's a very reasonable\n> choice for a backstop, but not a somewhat unsatisfying choice for a\n> primary mechanism.\n\nI'm not saying that it's impossible or even unreasonable to do\nsomething based on the current or anticipated state of the index,\nexactly. Just that you have to be realistic about how accurate that\nmodel is going to be in practice. In practice it'll be quite noisy,\nand that must be accounted for. For example, we could deliberately\ncoarsen the information, so that only relatively large differences in\napparent-bloatedness are visible to the model.\n\nThe other thing is that VACUUM itself cannot be expected to operate\nwith all that much precision, just because of how it works at a high\nlevel. Any quantitative measure will only be meaningful as a way of\nprioritizing work. Which is going to be far easier by making the\nbehavior dynamic, and continually reassessing. Once a relatively large\ndifference among two indexes first emerges, we can be relatively\nconfident about what to do. 
But smaller differences are likely just\nnoise.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 10 Feb 2022 11:35:32 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: decoupling table and index vacuum" } ]
[ { "msg_contents": "The multirange constructors created in makeMultirangeConstructors() are:\n\nmultirange_constructor0 -> not strict\nmultirange_constructor1 -> strict\nmultirange_constructor2 -> not strict\n\nAnd both multirange_constructor1 and multirange_constructor2 contain \ncode like\n\n/*\n * These checks should be guaranteed by our signature, but let's do them\n * just in case.\n */\nif (PG_ARGISNULL(0))\n ereport(ERROR,\n (errcode(ERRCODE_NULL_VALUE_NOT_ALLOWED),\n errmsg(\"multirange values cannot contain NULL members\")));\n\nIn case of multirange_constructor2 the \"should be guaranteed\" comment is \nnot actually true right now. In case of multirange_constructor1, maybe \nthis should be downgraded to an elog or assert or just removed.\n\nIs there a reason why we can't make them all three strict or all not \nstrict? (Obviously, it doesn't matter for multirange_constructor0.) Is \nthe fact that multirange_constructor2 is variadic the issue? Maybe at \nleast some more comments would be helpful.\n\n\n", "msg_date": "Wed, 21 Apr 2021 22:56:54 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "multirange constructor strictness" }, { "msg_contents": "On Wed, Apr 21, 2021 at 11:57 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> The multirange constructors created in makeMultirangeConstructors() are:\n>\n> multirange_constructor0 -> not strict\n> multirange_constructor1 -> strict\n> multirange_constructor2 -> not strict\n>\n> And both multirange_constructor1 and multirange_constructor2 contain\n> code like\n>\n> /*\n> * These checks should be guaranteed by our signature, but let's do them\n> * just in case.\n> */\n> if (PG_ARGISNULL(0))\n> ereport(ERROR,\n> (errcode(ERRCODE_NULL_VALUE_NOT_ALLOWED),\n> errmsg(\"multirange values cannot contain NULL members\")));\n>\n> In case of multirange_constructor2 the \"should be guaranteed\" comment is\n> not actually true right now. 
In case of multirange_constructor1, maybe\n> this should be downgraded to an elog or assert or just removed.\n>\n> Is there a reason why we can't make them all three strict or all not\n> strict? (Obviously, it doesn't matter for multirange_constructor0.) Is\n> the fact that multirange_constructor2 is variadic the issue? Maybe at\n> least some more comments would be helpful.\n\nThank you for noticing. I'll take care of it today.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Thu, 22 Apr 2021 14:00:38 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: multirange constructor strictness" } ]
[ { "msg_contents": "Hi,\n\nIn the PageGetItemIdCareful() introduced by commit a9ce839a, it seems\nlike we are using btree page pd_special structure BTPageOpaqueData for\nerror case without max aligning it.\n if (ItemIdGetOffset(itemid) + ItemIdGetLength(itemid) >\n BLCKSZ - sizeof(BTPageOpaqueData))\n ereport(ERROR,\n\nI'm not sure if it is intentional. ISTM that this was actually not a\nproblem because the BTPageOpaqueData already has all-aligned(???)\nmembers (3 uint32, 2 uint16). But it might be a problem if we add\nunaligned bytes. PageInit always max aligns this structure, when we\ninitialize the btree page in _bt_pageini and in all other places we\nmax align it before use. Since this is an error throwing path, I think\nwe should max align it just to be on the safer side. While on it, I\nthink we can also replace BLCKSZ with PageGetPageSize(page).\n\nAttaching a small patch. Thoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 22 Apr 2021 10:40:13 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "PageGetItemIdCareful - should we MAXALIGN sizeof(BTPageOpaqueData)?" }, { "msg_contents": "On Thu, Apr 22, 2021 at 10:40 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> In the PageGetItemIdCareful() introduced by commit a9ce839a, it seems\n> like we are using btree page pd_special structure BTPageOpaqueData for\n> error case without max aligning it.\n> if (ItemIdGetOffset(itemid) + ItemIdGetLength(itemid) >\n> BLCKSZ - sizeof(BTPageOpaqueData))\n> ereport(ERROR,\n>\n> I'm not sure if it is intentional. ISTM that this was actually not a\n> problem because the BTPageOpaqueData already has all-aligned(???)\n> members (3 uint32, 2 uint16). But it might be a problem if we add\n> unaligned bytes. 
PageInit always max aligns this structure, when we\n> initialize the btree page in _bt_pageini and in all other places we\n> max align it before use. Since this is an error throwing path, I think\n> we should max align it just to be on the safer side. While on it, I\n> think we can also replace BLCKSZ with PageGetPageSize(page).\n>\n> Attaching a small patch. Thoughts?\n\n+1 for changing to MAXALIGN(sizeof(BTPageOpaqueData)).\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 22 Apr 2021 11:36:24 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PageGetItemIdCareful - should we MAXALIGN\n sizeof(BTPageOpaqueData)?" }, { "msg_contents": "On Thu, Apr 22, 2021 at 11:36 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, Apr 22, 2021 at 10:40 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > In the PageGetItemIdCareful() introduced by commit a9ce839a, it seems\n> > like we are using btree page pd_special structure BTPageOpaqueData for\n> > error case without max aligning it.\n> > if (ItemIdGetOffset(itemid) + ItemIdGetLength(itemid) >\n> > BLCKSZ - sizeof(BTPageOpaqueData))\n> > ereport(ERROR,\n> >\n> > I'm not sure if it is intentional. ISTM that this was actually not a\n> > problem because the BTPageOpaqueData already has all-aligned(???)\n> > members (3 uint32, 2 uint16). But it might be a problem if we add\n> > unaligned bytes. PageInit always max aligns this structure, when we\n> > initialize the btree page in _bt_pageini and in all other places we\n> > max align it before use. Since this is an error throwing path, I think\n> > we should max align it just to be on the safer side. While on it, I\n> > think we can also replace BLCKSZ with PageGetPageSize(page).\n> >\n> > Attaching a small patch. Thoughts?\n>\n> +1 for changing to MAXALIGN(sizeof(BTPageOpaqueData)).\n\nThanks for taking a look at it. 
I added a CF entry\nhttps://commitfest.postgresql.org/33/3089/ so that we don't lose track\nof it.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 23 Apr 2021 08:09:48 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: PageGetItemIdCareful - should we MAXALIGN\n sizeof(BTPageOpaqueData)?" }, { "msg_contents": "On Wed, Apr 21, 2021 at 10:10 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> In the PageGetItemIdCareful() introduced by commit a9ce839a, it seems\n> like we are using btree page pd_special structure BTPageOpaqueData for\n> error case without max aligning it.\n> if (ItemIdGetOffset(itemid) + ItemIdGetLength(itemid) >\n> BLCKSZ - sizeof(BTPageOpaqueData))\n> ereport(ERROR,\n>\n> I'm not sure if it is intentional. ISTM that this was actually not a\n> problem because the BTPageOpaqueData already has all-aligned(???)\n> members (3 uint32, 2 uint16). But it might be a problem if we add\n> unaligned bytes.\n\nFair point. I pushed a commit to fix this to HEAD just now. Thanks.\n\n> PageInit always max aligns this structure, when we\n> initialize the btree page in _bt_pageini and in all other places we\n> max align it before use. Since this is an error throwing path, I think\n> we should max align it just to be on the safer side. While on it, I\n> think we can also replace BLCKSZ with PageGetPageSize(page).\n\nI didn't replace BLCKSZ with PageGetPageSize() in the commit, though.\nWe definitely don't want to rely on that being sane in amcheck (this\nis also why we don't use PageGetSpecialPointer(), which is the usual\napproach).\n\nActually, even if this wasn't amcheck code I might make the same call.\nI personally don't think that most existing calls to PageGetPageSize()\nmake very much sense.\n\n> Attaching a small patch. 
Thoughts?\n\nI'm curious: Was this just something that you noticed randomly, while\nlooking at the code? Or do you have a specific practical reason to\ncare about it? (I always like hearing about the ways in which people\nuse amcheck.)\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Fri, 23 Apr 2021 15:41:23 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: PageGetItemIdCareful - should we MAXALIGN\n sizeof(BTPageOpaqueData)?" }, { "msg_contents": "On Sat, Apr 24, 2021 at 4:11 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > PageInit always max aligns this structure, when we\n> > initialize the btree page in _bt_pageini and in all other places we\n> > max align it before use. Since this is an error throwing path, I think\n> > we should max align it just to be on the safer side. While on it, I\n> > think we can also replace BLCKSZ with PageGetPageSize(page).\n>\n> I didn't replace BLCKSZ with PageGetPageSize() in the commit, though.\n> We definitely don't want to rely on that being sane in amcheck (this\n> is also why we don't use PageGetSpecialPointer(), which is the usual\n> approach).\n\nIf the PageGetPageSize can't be sane within amcheck, does it mean that\nthe page would have been corrupted somewhere?\n\n> Actually, even if this wasn't amcheck code I might make the same call.\n> I personally don't think that most existing calls to PageGetPageSize()\n> make very much sense.\n\nShould we get rid of all existing PageGetPageSize and directly use\nBLCKSZ instead? AFAICS, all the index and heap pages are of BLCKSZ\n(PageInit has Assert(pageSize == BLCKSZ);).\n\nUsing PageGetPageSize to get the size that's been stored in the page,\nwe might catch errors early if at all the page is corrupted and the\nsize is overwritten . That's not the case if we use BLCKSZ which is\nnot stored in the page. In this case the size stored on the page\nbecomes redundant and the pd_pagesize_version could just be 2 bytes\nstoring the page version. 
While we save 2 bytes per page, I'm not sure\nthis is acceptable as PageHeader size gets changed.\n\n> I'm curious: Was this just something that you noticed randomly, while\n> looking at the code? Or do you have a specific practical reason to\n> care about it? (I always like hearing about the ways in which people\n> use amcheck.)\n\nI found this while working on one internal feature but not while using amcheck.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 26 Apr 2021 08:41:45 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: PageGetItemIdCareful - should we MAXALIGN\n sizeof(BTPageOpaqueData)?" } ]
[ { "msg_contents": "Hi all,\n\nCan you please explain the process of adding new items into autoconf\nscripts? Specifically into configure.ac. For example, if I want to add a\nnew --with-foo argument, let's say a new 3rd party library. What should I\ndo after proper configure.ac modification? Should I also re-generate\nconfigure script with local autoreconf? My doubts are that changes to\nconfigure script can be rather huge and likely conflicting with other\npatches that possibly do the same. Thanks!\n\n-- \nBest Regards,\nIan Zagorskikh\nCloudLinux: https://www.cloudlinux.com/\n", "msg_date": "Thu, 22 Apr 2021 05:46:52 +0000", "msg_from": "Ian Zagorskikh <izagorskikh@cloudlinux.com>", "msg_from_op": true, "msg_subject": "Procedure of modification of autoconf scripts" }, { "msg_contents": "Ian Zagorskikh <izagorskikh@cloudlinux.com> writes:\n> Can you please explain the process of adding new items into autoconf\n> scripts? Specifically into configure.ac. For example, if I want to add a\n> new --with-foo argument, let's say a new 3rd party library. What should I\n> do after proper configure.ac modification? Should I also re-generate\n> configure script with local autoreconf? My doubts are that changes to\n> configure script can be rather huge and likely conflicting with other\n> patches that possibly do the same. 
Thanks!\n\nIf you see massive changes in the configure script after a localized\nchange in configure.ac, it probably means that you're not using the\nright autoconf version.\n\nOur project convention is to use exactly the GNU release of whichever\nversion of autoconf we're on (currently 2.69). A lot of vendors ship\nmodified-to-some-extent autoconf versions, which can result in these\nsorts of unwanted changes if you just use whatever is on your\noperating system. Grab the official release off a GNU mirror and\ninstall it somewhere handy, and use that.\n\nAs a test case, try running autoconf and autoheader *without*\nhaving changed the input files. If the outputs don't match\nwhat's in git, then you've got something to fix.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 22 Apr 2021 01:58:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Procedure of modification of autoconf scripts" }, { "msg_contents": "Tom,\n\nThank you now it's clear!\n\nOn Thu, Apr 22, 2021 at 5:58 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n\n> If you see massive changes in the configure script after a localized\n> change in configure.ac, it probably means that you're not using the\n> right autoconf version.\n>\n> Our project convention is to use exactly the GNU release of whichever\n> version of autoconf we're on (currently 2.69). A lot of vendors ship\n> modified-to-some-extent autoconf versions, which can result in these\n> sorts of unwanted changes if you just use whatever is on your\n> operating system. Grab the official release off a GNU mirror and\n> install it somewhere handy, and use that.\n>\n> As a test case, try running autoconf and autoheader *without*\n> having changed the input files. 
If the outputs don't match\n> what's in git, then you've got something to fix.\n>\n> regards, tom lane\n>\n\n-- \nBest Regards,\nIan Zagorskikh\nCloudLinux: https://www.cloudlinux.com/\n", "msg_date": "Thu, 22 Apr 2021 06:01:47 +0000", "msg_from": "Ian Zagorskikh <izagorskikh@cloudlinux.com>", "msg_from_op": true, "msg_subject": "Re: Procedure of modification of autoconf scripts" } ]
[ { "msg_contents": "I found some doubious messages.\n\ncatalog.c:380, 404\n> errdetail(\"OID candidates were checked \\\"%llu\\\" times, but no unused OID is yet found.\",\n> (errmsg(\"new OID has been assigned in relation \\\"%s\\\" after \\\"%llu\\\" retries\",\n\nIt looks strange that %llu is enclosed by double-quotes and followed by\ntwo spaces.\n\npg_inherits.c:542\n> errhint(\"Use ALTER TABLE ... DETACH PARTITION ... FINALIZE to complete the pending detach operation\")));\npg_type.c:991\n> errhint(\"You can manually specify a multirange type name using the \\\"multirange_type_name\\\" attribute\")));\n\nA period is missing.\n\nsearch_cte.c: 520, 527\n> errmsg(\"search sequence column name and cycle mark column name are the same\"),\n> errmsg(\"search_sequence column name and cycle path column name are the same\"),\n\nThe underscore in the latter seems like a typo.\n\n\npartbounds.c: 2871, 2902\n> errdetail(\"The new modulus %d is not a factor of %d, the modulus of existing partition \\\"%s\\\".\",\n> errdetail(\"The new modulus %d is not factor of %d, the modulus of existing partition \\\"%s\\\".\",\n\nThe latter seems to be missing an article.\n\nA possible fix is attched.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 22 Apr 2021 17:31:34 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Some doubious messages" }, { "msg_contents": "On Thu, Apr 22, 2021 at 2:01 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> I found some doubious messages.\n>\n> catalog.c:380, 404\n> > errdetail(\"OID candidates were checked \\\"%llu\\\" times, but no unused OID is yet found.\",\n> > (errmsg(\"new OID has been assigned in relation \\\"%s\\\" after \\\"%llu\\\" retries\",\n>\n> It looks strange that %llu is enclosed by double-quotes and followed by\n> two spaces.\n\nYeah, we use double quotes for strings to separate out from the\nmessage text, but for 
integers it doesn't make sense.\n\n> pg_inherits.c:542\n> > errhint(\"Use ALTER TABLE ... DETACH PARTITION ... FINALIZE to complete the pending detach operation\")));\n> pg_type.c:991\n> > errhint(\"You can manually specify a multirange type name using the \\\"multirange_type_name\\\" attribute\")));\n>\n> A period is missing.\n\nYeah, we usually end the errdetail or errhit messages with a period.\n\n> search_cte.c: 520, 527\n> > errmsg(\"search sequence column name and cycle mark column name are the same\"),\n> > errmsg(\"search_sequence column name and cycle path column name are the same\"),\n>\n> The underscore in the latter seems like a typo.\n\nYeah.\n\n> partbounds.c: 2871, 2902\n> > errdetail(\"The new modulus %d is not a factor of %d, the modulus of existing partition \\\"%s\\\".\",\n> > errdetail(\"The new modulus %d is not factor of %d, the modulus of existing partition \\\"%s\\\".\",\n>\n> The latter seems to be missing an article.\n\nHmmm.\n\n> A possible fix is attched.\n\nPatch is failing make check, it is missing to incorporate test case\noutput changes.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 23 Apr 2021 08:34:30 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Some doubious messages" } ]
[ { "msg_contents": "Hi,\n\nI found some possible redundant comments in fmgr.c\n\n1.\nfmgr_symbol(Oid functionId, char **mod, char **fn)\n{\n HeapTuple procedureTuple;\n Form_pg_proc procedureStruct;\n bool isnull;\n Datum prosrcattr;\n Datum probinattr;\n- /* Otherwise we need the pg_proc entry */\n procedureTuple = SearchSysCache1(PROCOID, ObjectIdGetDatum(functionId));\n\nI guess the comment here was miscopied from fmgr_info_cxt_security:\n\n\n\n2.\n if (!HeapTupleIsValid(procedureTuple))\n elog(ERROR, \"cache lookup failed for function %u\", functionId);\n procedureStruct = (Form_pg_proc) GETSTRUCT(procedureTuple);\n- /*\n- */\n\nBest regards,\nhouzj", "msg_date": "Thu, 22 Apr 2021 11:44:10 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "Fix redundant comments in fmgr.c" }, { "msg_contents": "On Thu, Apr 22, 2021 at 11:44:10AM +0000, houzj.fnst@fujitsu.com wrote:\n> I found some possible redundant comments in fmgr.c\n\nThanks, fixed. I have noticed one extra inconsistency at the top of\nfmgr_symbol().\n\n> I guess the comment here was miscopied from fmgr_info_cxt_security:\n\nRight, coming right from the fast path in the other function.\n--\nMichael", "msg_date": "Fri, 23 Apr 2021 13:36:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix redundant comments in fmgr.c" } ]
[ { "msg_contents": "The docs don't explicitly mention the reduced lock level for this subcommand.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/", "msg_date": "Thu, 22 Apr 2021 12:49:49 +0100", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Docs for lock level of ALTER TABLE .. VALIDATE" }, { "msg_contents": "On 2021-Apr-22, Simon Riggs wrote:\n\n> The docs don't explicitly mention the reduced lock level for this subcommand.\n\nHmm, true. Pushed to all branches, thanks.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\n\n", "msg_date": "Thu, 6 May 2021 17:19:29 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Docs for lock level of ALTER TABLE .. VALIDATE" } ]
[ { "msg_contents": "897795240cfaaed724af2f53ed2c50c9862f951f forgot to reduce the lock\nlevel for CHECK constraints when allowing them to be NOT VALID.\n\nThis is simple and safe, since check constraints are not used in\nplanning until validated.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/", "msg_date": "Thu, 22 Apr 2021 13:00:40 +0100", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Reduce lock level for ALTER TABLE ... ADD CHECK .. NOT VALID" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nLooks fine to me\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Fri, 28 May 2021 14:10:00 +0000", "msg_from": "Greg Sabino Mullane <htamfids@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Reduce lock level for ALTER TABLE ... ADD CHECK .. NOT VALID" }, { "msg_contents": "On Thu, Apr 22, 2021 at 8:01 AM Simon Riggs <simon.riggs@enterprisedb.com>\nwrote:\n>\n> 897795240cfaaed724af2f53ed2c50c9862f951f forgot to reduce the lock\n> level for CHECK constraints when allowing them to be NOT VALID.\n>\n> This is simple and safe, since check constraints are not used in\n> planning until validated.\n\nThe patch also reduces the lock level when NOT VALID is not specified,\nwhich didn't seem to be the intention.\n\n# begin;\nBEGIN\n*# alter table alterlock2 add check (f1 > 0);\nALTER TABLE\n*# select * from my_locks order by 1;\n relname | max_lockmode\n------------+-----------------------\n alterlock2 | ShareRowExclusiveLock\n(1 row)\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Sat, 10 Jul 2021 09:49:58 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Reduce lock level for ALTER TABLE ... ADD CHECK .. NOT VALID" }, { "msg_contents": "On Sat, Jul 10, 2021 at 2:50 PM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> On Thu, Apr 22, 2021 at 8:01 AM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> >\n> > 897795240cfaaed724af2f53ed2c50c9862f951f forgot to reduce the lock\n> > level for CHECK constraints when allowing them to be NOT VALID.\n> >\n> > This is simple and safe, since check constraints are not used in\n> > planning until validated.\n>\n> The patch also reduces the lock level when NOT VALID is not specified, which didn't seem to be the intention.\n\nThank you for reviewing. I agree that the behavior works as you indicated.\n\nMy description of this was slightly muddled. The lock level for\nCONSTR_FOREIGN applies whether or not NOT VALID is used, but the test\ncase covers only NOT VALID because it a) isn't tested and b) is more\nimportant. I just followed that earlier pattern and that led me to\nadding \"NOT VALID\" onto the title of the thread.\n\nWhat is true for CONSTR_FOREIGN is also true for CONSTR_CHECK - the\nlock level can be set down to ShareRowExclusiveLock in all cases\nbecause adding a new CHECK does not affect the outcome of currently\nexecuting SELECT statements. (Note that this is not true for Drop\nConstraint, which has a different lock level, but we aren't changing\nthat here). Once the constraint is validated it may influence the\noptimization of later SELECTs.\n\nSo the patch and included docs are completely correct. Notice that the\nname of the patch reflects this better than the title of the thread.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 15 Jul 2021 07:47:58 +0100", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Reduce lock level for ALTER TABLE ... ADD CHECK .. NOT VALID" }, { "msg_contents": "On Thu, 15 Jul 2021 at 07:47, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> On Sat, Jul 10, 2021 at 2:50 PM John Naylor\n> <john.naylor@enterprisedb.com> wrote:\n> > On Thu, Apr 22, 2021 at 8:01 AM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> > >\n> > > 897795240cfaaed724af2f53ed2c50c9862f951f forgot to reduce the lock\n> > > level for CHECK constraints when allowing them to be NOT VALID.\n> > >\n> > > This is simple and safe, since check constraints are not used in\n> > > planning until validated.\n> >\n> > The patch also reduces the lock level when NOT VALID is not specified, which didn't seem to be the intention.\n>\n> Thank you for reviewing. I agree that the behavior works as you indicated.\n>\n> My description of this was slightly muddled. The lock level for\n> CONSTR_FOREIGN applies whether or not NOT VALID is used, but the test\n> case covers only NOT VALID because it a) isn't tested and b) is more\n> important. I just followed that earlier pattern and that led me to\n> adding \"NOT VALID\" onto the title of the thread.\n>\n> What is true for CONSTR_FOREIGN is also true for CONSTR_CHECK - the\n> lock level can be set down to ShareRowExclusiveLock in all cases\n> because adding a new CHECK does not affect the outcome of currently\n> executing SELECT statements. (Note that this is not true for Drop\n> Constraint, which has a different lock level, but we aren't changing\n> that here). Once the constraint is validated it may influence the\n> optimization of later SELECTs.\n>\n> So the patch and included docs are completely correct. Notice that the\n> name of the patch reflects this better than the title of the thread.\n\nAn additional patch covering other types of ALTER TABLE attached. Both\ncan be applied independently.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/", "msg_date": "Tue, 3 Aug 2021 21:59:24 +0100", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Reduce lock level for ALTER TABLE ... ADD CHECK .. NOT VALID" }, { "msg_contents": "Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> 897795240cfaaed724af2f53ed2c50c9862f951f forgot to reduce the lock\n> level for CHECK constraints when allowing them to be NOT VALID.\n> This is simple and safe, since check constraints are not used in\n> planning until validated.\n\nUnfortunately, just asserting that it's safe doesn't make it so.\n\nWe have two things that we need to worry about when considering\nreducing ALTER TABLE lock levels:\n\n1. Is it semantically okay (which is what you claim above)?\n\n2. Will onlooker processes see sufficiently-consistent catalog data\nif they look at the table during the change?\n\nTrying to reduce the lock level for ADD CHECK fails the second\ntest, because it has to alter two different catalogs. It has\nto increment pg_class.relchecks, and it has to make an entry in\npg_constraint. This patch makes it possible for onlookers to\nsee a value of pg_class.relchecks that is inconsistent with what\nthey find in pg_constraint, and then they will blow up.\n\nTo demonstrate this, I applied the patch and then did this\nin session 1:\n\nregression=# create table mytable (f1 int check(f1 > 0), f2 int);\nCREATE TABLE\n\nI then started a second session, attached to it with gdb, and\nset a breakpoint at CheckConstraintFetch. Letting that session\ncontinue, I told it\n\nregression=# select * from mytable;\n\nwhich of course reached the breakpoint at CheckConstraintFetch.\n(At this point, session 2 has read the pg_class entry for mytable,\nseen relchecks == 1, and now it wants to read pg_constraint.)\n\nI then told session 1:\n\nregression=# alter table mytable add check (f2 > 0);\nALTER TABLE\n\nwhich it happily did thanks to the now-inadequate lock level.\nI then released session 2 to continue, and behold it complains:\n\nWARNING: unexpected pg_constraint record found for relation \"mytable\"\nLINE 1: select * from mytable;\n ^\n\n(Pre-v14 branches would have made that an ERROR not a WARNING.)\nThat happens because the systable_beginscan() in CheckConstraintFetch\nwill get a new snapshot, so now it sees the new entry in pg_constraint,\nmaking the count of entries inconsistent with what it found in pg_class.\n\nIt's possible that this could be made safe if we replaced the exact\n\"relchecks\" count with a boolean \"relhaschecks\", analogous to the\nway indexes are handled. It's not clear to me though that the effort,\nand ensuing compatibility costs for applications that look at pg_class,\nwould be repaid by having a bit more concurrency here. One could\nalso worry about whether we really want to give up this consistency\ncross-check between pg_class and pg_constraint.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 04 Sep 2021 15:28:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Reduce lock level for ALTER TABLE ... ADD CHECK .. NOT VALID" }, { "msg_contents": "On Sat, 4 Sept 2021 at 20:28, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> We have two things that we need to worry about when considering\n> reducing ALTER TABLE lock levels:\n>\n> 1. Is it semantically okay (which is what you claim above)?\n>\n> 2. Will onlooker processes see sufficiently-consistent catalog data\n> if they look at the table during the change?\n>\n> Trying to reduce the lock level for ADD CHECK fails the second\n> test, because it has to alter two different catalogs. It has\n> to increment pg_class.relchecks, and it has to make an entry in\n> pg_constraint. This patch makes it possible for onlookers to\n> see a value of pg_class.relchecks that is inconsistent with what\n> they find in pg_constraint, and then they will blow up.\n\nThanks for the review. I will check this consideration for any future patches.\n\n> That happens because the systable_beginscan() in CheckConstraintFetch\n> will get a new snapshot, so now it sees the new entry in pg_constraint,\n> making the count of entries inconsistent with what it found in pg_class.\n\nThis is clearly important and we must now return the patch with feedback.\n\nI've looked at other similar cases and can't find any bugs in other areas, phew!\n\n> It's possible that this could be made safe if we replaced the exact\n> \"relchecks\" count with a boolean \"relhaschecks\", analogous to the\n> way indexes are handled. It's not clear to me though that the effort,\n> and ensuing compatibility costs for applications that look at pg_class,\n> would be repaid by having a bit more concurrency here. One could\n> also worry about whether we really want to give up this consistency\n> cross-check between pg_class and pg_constraint.\n\nI will work on a patch for this and see how complex it is.\n\nAt very least I will add a longer comment patch to mention this for the future.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Sun, 3 Oct 2021 17:51:57 +0100", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Reduce lock level for ALTER TABLE ... ADD CHECK .. NOT VALID" } ]
[ { "msg_contents": "Hi\n\nWhen try to improve the tab compleation feature in [1], I found an existing problem and a typo.\nThe patch was attached, please kindly to take a look at it. Thanks.\n\n[1]\nhttps://www.postgresql.org/message-id/OS0PR01MB61131A4347D385F02F60E123FB469%40OS0PR01MB6113.jpnprd01.prod.outlook.com\n\nRegards,\nTang", "msg_date": "Thu, 22 Apr 2021 12:44:28 +0000", "msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "use pg_strncasecmp to replace strncmp when compare \"pg_\"" }, { "msg_contents": "At Thu, 22 Apr 2021 12:44:28 +0000, \"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com> wrote in \n> Hi\n> \n> When try to improve the tab compleation feature in [1], I found an existing problem and a typo.\n> The patch was attached, please kindly to take a look at it. Thanks.\n> \n> [1]\n> https://www.postgresql.org/message-id/OS0PR01MB61131A4347D385F02F60E123FB469%40OS0PR01MB6113.jpnprd01.prod.outlook.com\n\nThat doesn't matter at all for now since we match schema identifiers\ncase-sensitively. Maybe it should be a part of the patch in [1].\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 23 Apr 2021 13:13:20 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: use pg_strncasecmp to replace strncmp when compare \"pg_\"" }, { "msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> At Thu, 22 Apr 2021 12:44:28 +0000, \"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com> wrote in \n>> When try to improve the tab compleation feature in [1], I found an existing problem and a typo.\n>> The patch was attached, please kindly to take a look at it. Thanks.\n\n> That doesn't matter at all for now since we match schema identifiers\n> case-sensitively. Maybe it should be a part of the patch in [1].\n\nYeah --- maybe this'd make sense as part of a full patch to improve\ntab-complete.c's handling of case folding, but I'm suspicious that\napplying it on its own would just make things less consistent.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 23 Apr 2021 01:06:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: use pg_strncasecmp to replace strncmp when compare \"pg_\"" }, { "msg_contents": "On Friday, April 23, 2021 2:06 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote\n\n>>Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n>> That doesn't matter at all for now since we match schema identifiers\n>> case-sensitively. Maybe it should be a part of the patch in [1].\n>\n>Yeah --- maybe this'd make sense as part of a full patch to improve\n>tab-complete.c's handling of case folding, but I'm suspicious that\n>applying it on its own would just make things less consistent.\n\nThanks for your reply. Merged this patch to [1]. Any further comment on [1] is very welcome.\n\n[1] https://www.postgresql.org/message-id/OS0PR01MB6113CA04E06D5BF221BC4FE2FB429%40OS0PR01MB6113.jpnprd01.prod.outlook.com\n\nRegards,\nTang\n\n\n", "msg_date": "Mon, 26 Apr 2021 13:48:55 +0000", "msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: use pg_strncasecmp to replace strncmp when compare \"pg_\"" } ]
[ { "msg_contents": "Hi,\n\nIs $SUBJECT intentional, or would it be desirable to add support it?\n\nExample:\n\nSELECT * FROM pg_catalog.pg_event_trigger;\n oid | evtname | evtevent | evtowner | evtfoid | evtenabled | evttags\n-----------+---------------+-----------------+----------+-----------+------------+---------\n289361636 | ddl_postgrest | ddl_command_end | 16696 | 289361635 | O |\n(1 row)\n\nSELECT * FROM pg_identify_object_as_address('pg_event_trigger'::regclass,289361636,0);\nERROR: requested object address for unsupported object class 32: text result \"ddl_postgrest\"\n\n/Joel", "msg_date": "Thu, 22 Apr 2021 19:11:25 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "pg_identify_object_as_address() doesn't support pg_event_trigger oids" }, { "msg_contents": "On 2021-Apr-22, Joel Jacobson wrote:\n\n> Is $SUBJECT intentional, or would it be desirable to add support it?\n> \n> Example:\n> \n> SELECT * FROM pg_catalog.pg_event_trigger;\n> oid | evtname | evtevent | evtowner | evtfoid | evtenabled | evttags\n> -----------+---------------+-----------------+----------+-----------+------------+---------\n> 289361636 | ddl_postgrest | ddl_command_end | 16696 | 289361635 | O |\n> (1 row)\n> \n> SELECT * FROM pg_identify_object_as_address('pg_event_trigger'::regclass,289361636,0);\n> ERROR: requested object address for unsupported object class 32: text result \"ddl_postgrest\"\n\nHmm, I think this is an accidental omission and it should be supported.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"In Europe they call me Niklaus Wirth; in the US they call me Nickel's worth.\n That's because in Europe they call me by name, and in the US by value!\"\n\n\n", "msg_date": "Thu, 22 Apr 2021 13:32:48 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: pg_identify_object_as_address() doesn't support pg_event_trigger oids" }, { "msg_contents": "On Thu, Apr 22, 2021, at 19:32, Alvaro Herrera wrote:\n> On 2021-Apr-22, Joel Jacobson wrote:\n> > SELECT * FROM pg_identify_object_as_address('pg_event_trigger'::regclass,289361636,0);\n> > ERROR: requested object address for unsupported object class 32: text result \"ddl_postgrest\"\n> \n> Hmm, I think this is an accidental omission and it should be supported.\n\nOh, I realise now the error came from a server running v13,\nbut there seems to be a problem in HEAD as well;\nthe \"object_names\" text[] output is empty for event triggers,\nso the output will be the same for all event triggers,\nwhich doesn't seem right since the output should be unique.\n\nThe output from the other functions pg_describe_object() and pg_identify_object()\ncontain the name in the output though.\n\nExample:\n\nSELECT\n *,\n pg_describe_object('pg_event_trigger'::regclass,oid,0),\n pg_identify_object('pg_event_trigger'::regclass,oid,0),\n pg_identify_object_as_address('pg_event_trigger'::regclass,oid,0)\nFROM pg_event_trigger;\n-[ RECORD 1 ]-----------------+-----------------------------------------------\noid | 396715\nevtname | ddl_postgrest\nevtevent | ddl_command_end\nevtowner | 10\nevtfoid | 396714\nevtenabled | O\nevttags |\npg_describe_object | event trigger ddl_postgrest\npg_identify_object | (\"event trigger\",,ddl_postgrest,ddl_postgrest)\npg_identify_object_as_address | (\"event trigger\",{},{})\n\nI therefore think the evtname should be added to object_names.\n\n/Joel", "msg_date": "Fri, 23 Apr 2021 08:54:45 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: pg_identify_object_as_address() doesn't support pg_event_trigger oids" }, { "msg_contents": "On Fri, Apr 23, 2021, at 08:54, Joel Jacobson wrote:\n> pg_describe_object | event trigger ddl_postgrest\n> pg_identify_object | (\"event trigger\",,ddl_postgrest,ddl_postgrest)\n> pg_identify_object_as_address | (\"event trigger\",{},{})\n> \n> I therefore think the evtname should be added to object_names.\n\nCould it possibly be as simple to fix as the attached patch?\nNot sure if the the string constructed by appendStringInfo() also needs to be adjusted.\n\nWith the patch, the example above returns:\n\npg_identify_object_as_address | (\"event trigger\",{ddl_postgrest},{})\n\n/Joel", "msg_date": "Fri, 23 Apr 2021 09:30:47 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "[PATCH] Re: pg_identify_object_as_address() doesn't support pg_event_trigger oids" }, { "msg_contents": "On Fri, Apr 23, 2021, at 09:30, Joel Jacobson wrote:\n> fix-pg_identify_object_as_address-for-event-triggers.patch\n\nAlso, since this is a problem also in v13 maybe this should also be back-ported?\nI think it's a bug since both pg_identify_object_as_address() and event triggers exists in v13,\nso the function should work there as well, otherwise users would need to do work-arounds for event triggers.\n\n/Joel", "msg_date": "Fri, 23 Apr 2021 09:33:36 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Re: pg_identify_object_as_address() doesn't support pg_event_trigger oids" }, { "msg_contents": "On Thu, Apr 22, 2021, at 19:32, Alvaro Herrera wrote:\n> On 2021-Apr-22, Joel Jacobson wrote:\n> \n> > Is $SUBJECT intentional, or would it be desirable to add support it?\n> > \n> > Example:\n> > \n> > SELECT * FROM pg_catalog.pg_event_trigger;\n> > oid | evtname | evtevent | evtowner | evtfoid | evtenabled | evttags\n> > -----------+---------------+-----------------+----------+-----------+------------+---------\n> > 289361636 | ddl_postgrest | ddl_command_end | 16696 | 289361635 | O |\n> > (1 row)\n> > \n> > SELECT * FROM pg_identify_object_as_address('pg_event_trigger'::regclass,289361636,0);\n> > ERROR: requested object address for unsupported object class 32: text result \"ddl_postgrest\"\n> \n> Hmm, I think this is an accidental omission and it should be supported.\n\nI've added the patch to the commitfest and added you as a reviewer, hope that works.\n\n/Joel", "msg_date": "Sat, 24 Apr 2021 08:08:04 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: pg_identify_object_as_address() doesn't support pg_event_trigger oids" }, { "msg_contents": "On Fri, Apr 23, 2021 at 09:33:36AM +0200, Joel Jacobson wrote:\n> Also, since this is a problem also in v13 maybe this should also be\n> back-ported? I think it's a bug since both\n> pg_identify_object_as_address() and event triggers exists in v13, so\n> the function should work there as well, otherwise users would need\n> to do work-arounds for event triggers. \n\nNo objections from here to do something in back-branches. We cannot\nhave a test for event triggers in object_address.sql and it would be\nbetter to keep it in a parallel set (see 676858b for example). Could\nyou however add a small test for that in event_trigger.sql? It would\nbe good to check after all three functions pg_identify_object(),\npg_identify_object_as_address() and pg_get_object_address().\n--\nMichael", "msg_date": "Mon, 26 Apr 2021 17:30:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Re: pg_identify_object_as_address() doesn't support pg_event_trigger oids" }, { "msg_contents": "On Mon, Apr 26, 2021, at 10:30, Michael Paquier wrote:\n> On Fri, Apr 23, 2021 at 09:33:36AM +0200, Joel Jacobson wrote:\n> > Also, since this is a problem also in v13 maybe this should also be\n> > back-ported? I think it's a bug since both\n> > pg_identify_object_as_address() and event triggers exists in v13, so\n> > the function should work there as well, otherwise users would need\n> > to do work-arounds for event triggers. \n> \n> No objections from here to do something in back-branches. We cannot\n> have a test for event triggers in object_address.sql and it would be\n> better to keep it in a parallel set (see 676858b for example). Could\n> you however add a small test for that in event_trigger.sql? It would\n> be good to check after all three functions pg_identify_object(),\n> pg_identify_object_as_address() and pg_get_object_address().\n> --\n> Michael\n\nThanks for the guidance in how to test.\n\nI've added a test at the end of event_trigger.sql,\nreusing the three event triggers already in existence,\njust before they are dropped.\n\nNew patch attached.\n\n/Joel", "msg_date": "Tue, 27 Apr 2021 07:16:25 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Re: pg_identify_object_as_address() doesn't support pg_event_trigger oids" }, { "msg_contents": "\nOn Tue, 27 Apr 2021 at 13:16, Joel Jacobson <joel@compiler.org> wrote:\n> On Mon, Apr 26, 2021, at 10:30, Michael Paquier wrote:\n>> On Fri, Apr 23, 2021 at 09:33:36AM +0200, Joel Jacobson wrote:\n>> > Also, since this is a problem also in v13 maybe this should also be\n>> > back-ported? I think it's a bug since both\n>> > pg_identify_object_as_address() and event triggers exists in v13, so\n>> > the function should work there as well, otherwise users would need\n>> > to do work-arounds for event triggers. \n>> \n>> No objections from here to do something in back-branches. We cannot\n>> have a test for event triggers in object_address.sql and it would be\n>> better to keep it in a parallel set (see 676858b for example). Could\n>> you however add a small test for that in event_trigger.sql? It would\n>> be good to check after all three functions pg_identify_object(),\n>> pg_identify_object_as_address() and pg_get_object_address().\n>> --\n>> Michael\n>\n> Thanks for the guidance in how to test.\n>\n> I've added a test at the end of event_trigger.sql,\n> reusing the three event triggers already in existence,\n> just before they are dropped.\n>\n> New patch attached.\n\nIMO we should add a space between the parameters to keep the code\nstyle consistently.\n\n+SELECT\n+ evtname,\n+ pg_describe_object('pg_event_trigger'::regclass,oid,0),\n+ pg_identify_object('pg_event_trigger'::regclass,oid,0),\n+ pg_identify_object_as_address('pg_event_trigger'::regclass,oid,0)\n+FROM pg_event_trigger\n+WHERE evtname IN ('start_rls_command','end_rls_command','sql_drop_command')\n+ORDER BY evtname;\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Tue, 27 Apr 2021 13:46:41 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Re: pg_identify_object_as_address() doesn't support pg_event_trigger oids" }, { "msg_contents": "On Tue, Apr 27, 2021 at 07:16:25AM +0200, Joel Jacobson wrote:\n> I've added a test at the end of event_trigger.sql,\n> reusing the three event triggers already in existence,\n> just before they are dropped.\n\nCool, thanks. I have been looking at it and I'd still like to\ncross-check the output data of pg_get_object_address() to see if\npg_identify_object() remains consistent through it. See for example\nthe attached that uses a trick based on LATERAL, a bit different than\nwhat's done in object_address.sql but that gives the same amount of\ncoverage (I could also use two ROW()s and an equality, but well..).\n--\nMichael", "msg_date": "Tue, 27 Apr 2021 16:48:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Re: pg_identify_object_as_address() doesn't support pg_event_trigger oids" }, { "msg_contents": "On Tue, Apr 27, 2021, at 09:48, Michael Paquier wrote:\n> On Tue, Apr 27, 2021 at 07:16:25AM +0200, Joel Jacobson wrote:\n> > I've added a test at the end of event_trigger.sql,\n> > reusing the three event triggers already in existence,\n> > just before they are dropped.\n> \n> Cool, thanks. I have been looking at it and I'd still like to\n> cross-check the output data of pg_get_object_address() to see if\n> pg_identify_object() remains consistent through it. See for example\n> the attached that uses a trick based on LATERAL, a bit different than\n> what's done in object_address.sql but that gives the same amount of\n> coverage (I could also use two ROW()s and an equality, but well..).\n\nNeat trick, looks good to me.\n\nI've successfully tested fix_event_trigger_pg_identify_object_as_address3.patch.\n\n/Joel", "msg_date": "Tue, 27 Apr 2021 14:33:36 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Re: pg_identify_object_as_address() doesn't support pg_event_trigger oids" }, { "msg_contents": "On Tue, Apr 27, 2021 at 02:33:36PM +0200, Joel Jacobson wrote:\n> I've successfully tested fix_event_trigger_pg_identify_object_as_address3.patch.\n\nThanks. Applied down to 9.6 then.\n--\nMichael", "msg_date": "Wed, 28 Apr 2021 12:08:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Re: pg_identify_object_as_address() doesn't support pg_event_trigger oids" } ]
[ { "msg_contents": "Hi,\n\nMichael Paquier (running locally I think), and subsequently Thomas Munro\n(noticing [1]), privately reported that they noticed an assertion failure in\nGetSnapshotData(). Both reasonably were wondering if that's related to the\nsnapshot scalability patches.\n\nMichael reported the following assertion failure in 023_pitr_prepared_xact.pl:\n> TRAP: FailedAssertion(\"TransactionIdPrecedesOrEquals(TransactionXmin, RecentXmin)\", File: \"procarray.c\", Line: 2468, PID: 22901)\n\n> The failure was triggered by one of the new TAP tests,\n> 023_pitr_prepared_xact.pl, after recovering a 2PC transaction that\n> used a transaction ID that matches with RecentXmin:\n> (gdb) p RecentXmin\n> $1 = 588\n> (gdb) p TransactionXmin\n> $2 = 589\n\nI tried for a while to reproduce that, but couldn't. Adding a bunch of\ndebugging output and increasing the log level shows the problem pretty clearly\nhowever, just not tripping any asserts:\n\n2021-04-21 17:55:54.287 PDT [1829098] [unknown] LOG: setting xmin to 588\n...\n2021-04-21 17:55:54.377 PDT [1829049] DEBUG: removing all KnownAssignedXids\n2021-04-21 17:55:54.377 PDT [1829049] DEBUG: release all standby locks\n...\n2021-04-21 17:55:54.396 PDT [1829100] [unknown] LOG: setting xmin to 589\n...\n2021-04-21 17:55:55.379 PDT [1829048] LOG: database system is ready to accept connections\n...\n2021-04-21 17:55:55.380 PDT [1829120] LOG: setting xmin to 588\n...\n2021-04-21 17:55:55.386 PDT [1829126] [unknown] LOG: setting xmin to 588\n2021-04-21 17:55:55.387 PDT [1829126] 023_pitr_prepared_xact.pl LOG: statement: COMMIT PREPARED 'fooinsert';\n...\n2021-04-21 17:55:55.428 PDT [1829128] [unknown] LOG: setting xmin to 589\n\nSo there's clear proof for xmin going from 588 to 589 and back and\nforth.\n\n\nAfter looking some more the bug isn't even that subtle. And definitely not new\n- likely it exists since the introduction of hot standby.\n\nThe sequence in StartupXLOG() leading to the issue is the following:\n\n1) RecoverPreparedTransactions();\n2) ShutdownRecoveryTransactionEnvironment();\n3) XLogCtl->SharedRecoveryState = RECOVERY_STATE_DONE;\n\nBecause 2) resets the KnownAssignedXids machinery, snapshots that happen\nbetween 2) and 3) will not actually look at the procarray to compute\nsnapshots, as that only happens within\n\n\tsnapshot->takenDuringRecovery = RecoveryInProgress();\n\tif (!snapshot->takenDuringRecovery)\n\nand RecoveryInProgress() checks XLogCtl->SharedRecoveryState !=\nRECOVERY_STATE_DONE, which is set in 3).\n\nSo snapshots within that window will always be \"empty\", i.e. xmin is\nlatestCompletedXid and xmax is latestCompletedXid + 1. Once we reach 3), we'll\nlook at the procarray, which then leads xmin going back to 588.\n\n\nI think that this can lead to data corruption, because a too new xmin horizon\ncould lead to rows from a prepared transaction getting hint bitted as dead (or\nperhaps even pruned, although that's probably harder to hit). Due to the too\nnew xmin horizon we won't treat rows by the prepared xact as in-progress, and\nTransactionIdDidCommit() will return false, as the transaction didn't commit\nyet.
Which afaict can result in row versions created by the prepared\ntransaction being invisible even after the prepared transaction commits.\n\nWithout prepared transaction there probably isn't an issue, because there\nshouldn't be any other in-progress xids at that point?\n\n\nI think to fix the issue we'd have to move\nShutdownRecoveryTransactionEnvironment() to after XLogCtl->SharedRecoveryState\n= RECOVERY_STATE_DONE.\n\nThe acquisition of ProcArrayLock() in\nShutdownRecoveryTransactionEnvironment()->ExpireAllKnownAssignedTransactionIds()\nshould prevent the data from being removed between the RecoveryInProgress()\nand the KnownAssignedXidsGetAndSetXmin() calls in GetSnapshotData().\n\nI haven't yet figured out whether there would be a problem with deferring the\nother tasks in ShutdownRecoveryTransactionEnvironment() until after\nRECOVERY_STATE_DONE.\n\n\nI think we ought to introduce assertions that have a higher chance to\ncatch cases like this. The window to hit the new assertion that caused\nMichael to hit this is pretty darn small (xmin needs to move backwards\nbetween two snapshot computations inside a single transaction). I\n*think* we can safely assert that xmin doesn't move backwards globally,\nif we store it as a 64bit xid, and don't perform that check in\nwalsender?\n\nGreetings,\n\nAndres Freund\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2021-04-20%2003%3A04%3A04\n\n\n", "msg_date": "Thu, 22 Apr 2021 13:36:03 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Incorrect snapshots while promoting hot standby node when 2PC is used" }, { "msg_contents": "Hi Andres!\n\n> On 23 Apr 2021, at 01:36, Andres Freund <andres@anarazel.de> wrote:\n> \n> So snapshots within that window will always be \"empty\", i.e. xmin is\n> latestCompletedXid and xmax is latestCompletedXid + 1. 
Once we reach 3), we'll\n> look at the procarray, which then leads xmin going back to 588.\n> \n> \n> I think that this can lead to data corruption, because a too new xmin horizon\n> could lead to rows from a prepared transaction getting hint bitted as dead (or\n> perhaps even pruned, although that's probably harder to hit). Due to the too\n> new xmin horizon we won't treat rows by the prepared xact as in-progress, and\n> TransactionIdDidCommit() will return false, as the transaction didn't commit\n> yet. Which afaict can result in row versions created by the prepared\n> transaction being invisible even after the prepared transaction commits.\n> \n> Without prepared transaction there probably isn't an issue, because there\n> shouldn't be any other in-progress xids at that point?\n\nI'm investigating somewhat resemblant case.\nWe have an OLTP sharded installation where shards are almost always under rebalancing. Data movement is implemented with 2pc.\nSwitchover happens quite often due to datacenter drills. The installation is running on PostgreSQL 12.6.\n\nIn January heapcheck of backup reported some suspicious problems\nERROR: Page marked as all-frozen, but found non frozen tuple. Oid(relation)=18487, blkno(page)=1470240, offnum(tuple)=1\nERROR: Page marked as all-frozen, but found non frozen tuple. Oid(relation)=18487, blkno(page)=1470241, offnum(tuple)=1\nERROR: Page marked as all-frozen, but found non frozen tuple. Oid(relation)=18487, blkno(page)=1470242, offnum(tuple)=1\n...\nand so on for ~100 pages - tuples with lp==1 were not frozen.\n\nWe froze tuples with pg_dirty_hands and run VACUUM (DSIABLE_PAGE_SKIPPING) on the table.\n\nIn the end of the March the same shard stroke again with:\nERROR: Page marked as all-frozen, but found non frozen tuple. Oid(relation)=18487, blkno(page)=1470240, offnum(tuple)=42\n....\naround ~1040 blocks (starting from the same 1470240!) 
had non-frozen tuple at lp==42.\nI've run\nupdate s3.objects_65 set created = created where ctid = '(1470241,42)' returning *;\n\nAfter that heapcheck showed VM problem\nERROR: XX001: Found non all-visible tuple. Oid(relation)=18487, blkno(page)=1470240, offnum(tuple)=42\nLOCATION: collect_corrupt_items, heap_check.c:186\n\nVACUUM fixed it with warnings.\nWARNING: 01000: page is not marked all-visible but visibility map bit is set in relation \"objects_65\" page 1470240\nand failed on next page\nERROR: XX001: found xmin 1650436694 from before relfrozenxid 1752174172\nLOCATION: heap_prepare_freeze_tuple, heapam.c:6172\n\nI run update from all tuples in heapcheks ctid list and subsequent vacuum (without page skipping). This satisfied corruption monitoring.\n\n\nCan this case be related to the problem that you described?\n\nOr, perhaps, it looks more like a hardware problem? Data_checksums are on, but few years ago we observed ssd firmware that was loosing updates, but passing checksums. I'm sure that we would benefit from having separate relation fork for checksums or LSNs.\n\n\nWe observe similar cases 3-5 times a year. To the date no data was lost due to this, but it's somewhat annoying.\nBTW I'd say that such things are an argument for back-porting pg_surgery and heapcheck to old versions.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n\n\n", "msg_date": "Sat, 1 May 2021 17:35:09 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: Incorrect snapshots while promoting hot standby node when 2PC is\n used" }, { "msg_contents": "Hi,\n\nOn 2021-05-01 17:35:09 +0500, Andrey Borodin wrote:\n> I'm investigating somewhat resemblant case.\n> We have an OLTP sharded installation where shards are almost always under rebalancing. Data movement is implemented with 2pc.\n> Switchover happens quite often due to datacenter drills. 
The installation is running on PostgreSQL 12.6.\n\nIf you still have the data it would be useful if you could check if the\nLSNs of the corrupted pages are LSNs from shortly after standby\npromotion/switchover?\n\n\n> Can this case be related to the problem that you described?\n\nIt seems possible, but it's hard to know without a lot more information.\n\n\n> Or, perhaps, it looks more like a hardware problem? Data_checksums are\n> on, but few years ago we observed ssd firmware that was loosing\n> updates, but passing checksums. I'm sure that we would benefit from\n> having separate relation fork for checksums or LSNs.\n\nRight - checksums are \"page local\". They can only detect if a page is\ncorrupted, not if e.g. an older version of the page (with correct\nchecksum) has been restored. While there are schemes to have stronger\nerror detection properties, they do come with substantial overhead (at\nleast the ones I can think of right now).\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 3 May 2021 11:10:50 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Incorrect snapshots while promoting hot standby node when 2PC is\n used" }, { "msg_contents": "\n\n> On 3 May 2021, at 23:10, Andres Freund <andres@anarazel.de> wrote:\n> \n> Hi,\n> \n> On 2021-05-01 17:35:09 +0500, Andrey Borodin wrote:\n>> I'm investigating somewhat resemblant case.\n>> We have an OLTP sharded installation where shards are almost always under rebalancing. Data movement is implemented with 2pc.\n>> Switchover happens quite often due to datacenter drills. 
The installation is running on PostgreSQL 12.6.\n> \n> If you still have the data it would be useful if you could check if the\n> LSNs of the corrupted pages are LSNs from shortly after standby\n> promotion/switchover?\nThat's a neat idea, I'll check if I can restore backup with corruptions.\nI have a test cluster with corruptions, but it has undergone tens of switchovers...\n\n>> Or, perhaps, it looks more like a hardware problem? Data_checksums are\n>> on, but few years ago we observed ssd firmware that was loosing\n>> updates, but passing checksums. I'm sure that we would benefit from\n>> having separate relation fork for checksums or LSNs.\n> \n> Right - checksums are \"page local\". They can only detect if a page is\n> corrupted, not if e.g. an older version of the page (with correct\n> checksum) has been restored. While there are schemes to have stronger\n> error detection properties, they do come with substantial overhead (at\n> least the ones I can think of right now).\n\nWe can have PTRACK-like fork with page LSNs. It can be flushed on checkpoint and restored from WAL on crash. So we always can detect stale page version. Like LSN-track :) We will have much faster rewind and delta-backups for free.\n\nThough I don't think it worth an effort until we at least checksum CLOG.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Tue, 4 May 2021 11:58:36 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: Incorrect snapshots while promoting hot standby node when 2PC is\n used" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Michael Paquier (running locally I think), and subsequently Thomas Munro\n> (noticing [1]), privately reported that they noticed an assertion failure in\n> GetSnapshotData(). 
Both reasonably were wondering if that's related to the\n> snapshot scalability patches.\n> Michael reported the following assertion failure in 023_pitr_prepared_xact.pl:\n>> TRAP: FailedAssertion(\"TransactionIdPrecedesOrEquals(TransactionXmin, RecentXmin)\", File: \"procarray.c\", Line: 2468, PID: 22901)\n\nmantid just showed a failure that looks like the same thing, at\nleast it's also in 023_pitr_prepared_xact.pl:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mantid&dt=2021-05-03%2013%3A07%3A06\n\nThe assertion line number is rather different though:\n\nTRAP: FailedAssertion(\"TransactionIdPrecedesOrEquals(TransactionXmin, RecentXmin)\", File: \"procarray.c\", Line: 2094, PID: 1163004)\n\nand interestingly, this happened in a parallel worker:\n\npostgres: node_pitr: parallel worker for PID 1162998 (ExceptionalCondition+0x7a)[0x946eca]\npostgres: node_pitr: parallel worker for PID 1162998 (GetSnapshotData+0x897)[0x7ef327]\npostgres: node_pitr: parallel worker for PID 1162998 (GetNonHistoricCatalogSnapshot+0x4a)[0x986a5a]\npostgres: node_pitr: parallel worker for PID 1162998 (systable_beginscan+0x189)[0x4fffe9]\npostgres: node_pitr: parallel worker for PID 1162998 [0x937336]\npostgres: node_pitr: parallel worker for PID 1162998 [0x937743]\npostgres: node_pitr: parallel worker for PID 1162998 (RelationIdGetRelation+0x85)[0x93f155]\npostgres: node_pitr: parallel worker for PID 1162998 (relation_open+0x5c)[0x4a348c]\npostgres: node_pitr: parallel worker for PID 1162998 (index_open+0x6)[0x5007a6]\npostgres: node_pitr: parallel worker for PID 1162998 (systable_beginscan+0x177)[0x4fffd7]\npostgres: node_pitr: parallel worker for PID 1162998 [0x937336]\npostgres: node_pitr: parallel worker for PID 1162998 [0x93e1c1]\npostgres: node_pitr: parallel worker for PID 1162998 (RelationIdGetRelation+0xbd)[0x93f18d]\npostgres: node_pitr: parallel worker for PID 1162998 (relation_open+0x5c)[0x4a348c]\npostgres: node_pitr: parallel worker for PID 1162998 
(table_open+0x6)[0x532656]\npostgres: node_pitr: parallel worker for PID 1162998 [0x937306]\npostgres: node_pitr: parallel worker for PID 1162998 [0x93e1c1]\npostgres: node_pitr: parallel worker for PID 1162998 (RelationIdGetRelation+0xbd)[0x93f18d]\npostgres: node_pitr: parallel worker for PID 1162998 (relation_open+0x5c)[0x4a348c]\npostgres: node_pitr: parallel worker for PID 1162998 (table_open+0x6)[0x532656]\npostgres: node_pitr: parallel worker for PID 1162998 [0x92c4f1]\npostgres: node_pitr: parallel worker for PID 1162998 (SearchCatCache1+0x176)[0x92e236]\npostgres: node_pitr: parallel worker for PID 1162998 (TupleDescInitEntry+0xb3)[0x4a7eb3]\npostgres: node_pitr: parallel worker for PID 1162998 [0x688f30]\npostgres: node_pitr: parallel worker for PID 1162998 (ExecInitResultTupleSlotTL+0x1b)[0x68a8fb]\npostgres: node_pitr: parallel worker for PID 1162998 (ExecInitResult+0x92)[0x6af902]\npostgres: node_pitr: parallel worker for PID 1162998 (ExecInitNode+0x446)[0x684b06]\npostgres: node_pitr: parallel worker for PID 1162998 (standard_ExecutorStart+0x269)[0x67d7b9]\npostgres: node_pitr: parallel worker for PID 1162998 (ParallelQueryMain+0x1a3)[0x681d83]\npostgres: node_pitr: parallel worker for PID 1162998 (ParallelWorkerMain+0x408)[0x53c218]\npostgres: node_pitr: parallel worker for PID 1162998 (StartBackgroundWorker+0x23f)[0x774ebf]\npostgres: node_pitr: parallel worker for PID 1162998 [0x780f3d]\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 04 May 2021 12:32:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Incorrect snapshots while promoting hot standby node when 2PC is\n used" }, { "msg_contents": "Hi,\n\nOn 2021-05-04 12:32:34 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Michael Paquier (running locally I think), and subsequently Thomas Munro\n> > (noticing [1]), privately reported that they noticed an assertion failure in\n> > GetSnapshotData(). 
Both reasonably were wondering if that's related to the\n> > snapshot scalability patches.\n> > Michael reported the following assertion failure in 023_pitr_prepared_xact.pl:\n> >> TRAP: FailedAssertion(\"TransactionIdPrecedesOrEquals(TransactionXmin, RecentXmin)\", File: \"procarray.c\", Line: 2468, PID: 22901)\n> \n> mantid just showed a failure that looks like the same thing, at\n> least it's also in 023_pitr_prepared_xact.pl:\n> \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mantid&dt=2021-05-03%2013%3A07%3A06\n> \n> The assertion line number is rather different though:\n> \n> TRAP: FailedAssertion(\"TransactionIdPrecedesOrEquals(TransactionXmin, RecentXmin)\", File: \"procarray.c\", Line: 2094, PID: 1163004)\n\nI managed to hit that one as well and it's also what fairywren hit - the\nassertion in 2094 and 2468 are basically copies of the same check, and\nwhich one hit is a question of timing.\n\n\n> and interestingly, this happened in a parallel worker:\n\nI think the issue can be hit (or rather detected) whenever a transaction\nbuilds one snapshot while in recovery, and a second one during\nend-of-recovery. 
The parallel query here is just\n2021-05-03 09:18:35.602 EDT [1162987:6] DETAIL: Failed process was running: SELECT pg_is_in_recovery() = 'f';\n(parallel due to force_parallel_mode) - which of course is likely to run\nduring end-of-recovery\n\nSo it does seem like the same bug of resetting the KnownAssignedXids\nstuff too early.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 4 May 2021 10:13:37 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Incorrect snapshots while promoting hot standby node when 2PC is\n used" }, { "msg_contents": "On Thu, Apr 22, 2021 at 01:36:03PM -0700, Andres Freund wrote:\n> The sequence in StartupXLOG() leading to the issue is the following:\n> \n> 1) RecoverPreparedTransactions();\n> 2) ShutdownRecoveryTransactionEnvironment();\n> 3) XLogCtl->SharedRecoveryState = RECOVERY_STATE_DONE;\n> \n> Because 2) resets the KnownAssignedXids machinery, snapshots that happen\n> between 2) and 3) will not actually look at the procarray to compute\n> snapshots, as that only happens within\n> \n> \tsnapshot->takenDuringRecovery = RecoveryInProgress();\n> \tif (!snapshot->takenDuringRecovery)\n> \n> and RecoveryInProgress() checks XLogCtl->SharedRecoveryState !=\n> RECOVERY_STATE_DONE, which is set in 3).\n\nOh, indeed. It is easy to see RecentXmin jumping back-and-worth while\nrunning 023_pitr_prepared_xact.pl with a small sleep added just after\nShutdownRecoveryTransactionEnvironment().\n\n> Without prepared transaction there probably isn't an issue, because there\n> shouldn't be any other in-progress xids at that point?\n\nYes, there should not be any as far as I recall. 
2PC is kind of\nspecial with its fake ProcArray entries.\n\n> I think to fix the issue we'd have to move\n> ShutdownRecoveryTransactionEnvironment() to after XLogCtl->SharedRecoveryState\n> = RECOVERY_STATE_DONE.\n> \n> The acquisition of ProcArrayLock() in\n> ShutdownRecoveryTransactionEnvironment()->ExpireAllKnownAssignedTransactionIds()\n> should prevent the data from being removed between the RecoveryInProgress()\n> and the KnownAssignedXidsGetAndSetXmin() calls in GetSnapshotData().\n> \n> I haven't yet figured out whether there would be a problem with deferring the\n> other tasks in ShutdownRecoveryTransactionEnvironment() until after\n> RECOVERY_STATE_DONE.\n\nHmm. This would mean releasing all the exclusive locks tracked by a\nstandby, as of StandbyReleaseAllLocks(), after opening the instance\nfor writes after a promotion. I don't think that's unsafe, but it\nwould be intrusive.\n\nAnyway, isn't the issue ExpireAllKnownAssignedTransactionIds() itself,\nwhere we should try to not wipe out the 2PC entries to make sure that\nall those snapshots still see the 2PC transactions as something to\ncount on? I am attaching a crude patch to show the idea.\n\nThoughts?\n--\nMichael", "msg_date": "Wed, 26 May 2021 16:57:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Incorrect snapshots while promoting hot standby node when 2PC is\n used" }, { "msg_contents": "Hi,\n\nOn 2021-05-26 16:57:31 +0900, Michael Paquier wrote:\n> Yes, there should not be any as far as I recall. 
2PC is kind of\n> special with its fake ProcArray entries.\n\nIt's really quite an awful design :(\n\n\n> > I think to fix the issue we'd have to move\n> > ShutdownRecoveryTransactionEnvironment() to after XLogCtl->SharedRecoveryState\n> > = RECOVERY_STATE_DONE.\n> >\n> > The acquisition of ProcArrayLock() in\n> > ShutdownRecoveryTransactionEnvironment()->ExpireAllKnownAssignedTransactionIds()\n> > should prevent the data from being removed between the RecoveryInProgress()\n> > and the KnownAssignedXidsGetAndSetXmin() calls in GetSnapshotData().\n> >\n> > I haven't yet figured out whether there would be a problem with deferring the\n> > other tasks in ShutdownRecoveryTransactionEnvironment() until after\n> > RECOVERY_STATE_DONE.\n>\n> Hmm. This would mean releasing all the exclusive locks tracked by a\n> standby, as of StandbyReleaseAllLocks(), after opening the instance\n> for writes after a promotion. I don't think that's unsafe, but it\n> would be intrusive.\n\nWhy would it be intrusive? We're talking a split second here, no? More\nimportantly, I don't think it's correct to release the locks at that\npoint.\n\n\n> Anyway, isn't the issue ExpireAllKnownAssignedTransactionIds() itself,\n> where we should try to not wipe out the 2PC entries to make sure that\n> all those snapshots still see the 2PC transactions as something to\n> count on? I am attaching a crude patch to show the idea.\n\nI don't think that's sufficient. We can't do most of the other stuff in\nShutdownRecoveryTransactionEnvironment() before changing\nXLogCtl->SharedRecoveryState either. As long as the other backends think\nwe are in recovery, we shouldn't release e.g. 
the virtual transaction.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 27 May 2021 10:01:49 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Incorrect snapshots while promoting hot standby node when 2PC is\n used" }, { "msg_contents": "On Thu, May 27, 2021 at 10:01:49AM -0700, Andres Freund wrote:\n> Why would it be intrusive? We're talking a split second here, no? More\n> importantly, I don't think it's correct to release the locks at that\n> point.\n\nI have been looking at all that for the last couple of days, and\nchecked the code to make sure that relying on RecoveryInProgress() as\nthe tipping point is logically correct in terms of virtual XID,\nsnapshot build and KnownAssignedXids cleanup. This stuff is tricky\nenough that I may have missed something, but my impression (and\ntesting) is that we should be safe.\n\nI am adding this patch to the next CF for now. More eyes are needed.\n--\nMichael", "msg_date": "Mon, 31 May 2021 21:37:17 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Incorrect snapshots while promoting hot standby node when 2PC is\n used" }, { "msg_contents": "On Mon, May 31, 2021 at 09:37:17PM +0900, Michael Paquier wrote:\n> I have been looking at all that for the last couple of days, and\n> checked the code to make sure that relying on RecoveryInProgress() as\n> the tipping point is logically correct in terms of virtual XID,\n> snapshot build and KnownAssignedXids cleanup. This stuff is tricky\n> enough that I may have missed something, but my impression (and\n> testing) is that we should be safe.\n\nA couple of months later, I have looked back at this thread and this\nreport. 
I have rechecked all the standby handling and snapshot builds\ninvolving KnownAssignedXids and looked at all the phases that are\ngetting called until we call ShutdownRecoveryTransactionEnvironment()\nto release these, and I don't think that there is a problem with the\nsolution proposed here. So I propose to move on and apply this\npatch. Please let me know if there are any objections.\n--\nMichael", "msg_date": "Fri, 1 Oct 2021 14:11:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Incorrect snapshots while promoting hot standby node when 2PC is\n used" }, { "msg_contents": "On Fri, Oct 01, 2021 at 02:11:15PM +0900, Michael Paquier wrote:\n> A couple of months later, I have looked back at this thread and this\n> report. I have rechecked all the standby handling and snapshot builds\n> involving KnownAssignedXids and looked at all the phases that are\n> getting called until we call ShutdownRecoveryTransactionEnvironment()\n> to release these, and I don't think that there is a problem with the\n> solution proposed here. So I propose to move on and apply this\n> patch. Please let me know if there are any objections.\n\nOkay, I have worked more on that today, did more tests and applied the\nfix as of 8a42379.\n--\nMichael", "msg_date": "Mon, 4 Oct 2021 17:27:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Incorrect snapshots while promoting hot standby node when 2PC is\n used" }, { "msg_contents": "On 2021-10-04 17:27:44 +0900, Michael Paquier wrote:\n> On Fri, Oct 01, 2021 at 02:11:15PM +0900, Michael Paquier wrote:\n> > A couple of months later, I have looked back at this thread and this\n> > report. 
I have rechecked all the standby handling and snapshot builds\n> > involving KnownAssignedXids and looked at all the phases that are\n> > getting called until we call ShutdownRecoveryTransactionEnvironment()\n> > to release these, and I don't think that there is a problem with the\n> > solution proposed here. So I propose to move on and apply this\n> > patch. Please let me know if there are any objections.\n> \n> Okay, I have worked more on that today, did more tests and applied the\n> fix as of 8a42379.\n\nThanks for remembering this!\n\n\n", "msg_date": "Tue, 5 Oct 2021 23:09:41 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Incorrect snapshots while promoting hot standby node when 2PC is\n used" } ]
[ { "msg_contents": "Would anyone oppose me pushing this for tab-completing the new keywords\nof\nALTER TABLE .. DETACH PARTITION?\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"Por suerte hoy explotó el califont porque si no me habría muerto\n de aburrido\" (Papelucho)", "msg_date": "Thu, 22 Apr 2021 16:40:35 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "tab-complete for ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On Thu, Apr 22, 2021 at 04:40:35PM -0400, Alvaro Herrera wrote:\n> Would anyone oppose me pushing this for tab-completing the new keywords\n> of ALTER TABLE .. DETACH PARTITION?\n\n+1 to apply tab completion for v14\n\n-- \nJustin", "msg_date": "Mon, 26 Apr 2021 09:24:34 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: tab-complete for ALTER TABLE .. DETACH PARTITION CONCURRENTLY" }, { "msg_contents": "On 2021-Apr-26, Justin Pryzby wrote:\n\n> On Thu, Apr 22, 2021 at 04:40:35PM -0400, Alvaro Herrera wrote:\n> > Would anyone oppose me pushing this for tab-completing the new keywords\n> > of ALTER TABLE .. DETACH PARTITION?\n> \n> +1 to apply tab completion for v14\n\nPushed.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"Before you were born your parents weren't as boring as they are now. They\ngot that way paying your bills, cleaning up your room and listening to you\ntell them how idealistic you are.\" -- Charles J. Sykes' advice to teenagers", "msg_date": "Mon, 26 Apr 2021 16:22:31 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: tab-complete for ALTER TABLE .. DETACH PARTITION CONCURRENTLY" } ]
[ { "msg_contents": "\n\nOn 2021/04/23 0:36, Andres Freund wrote:\n> Hi\n> \n> On Thu, Apr 22, 2021, at 06:42, Fujii Masao wrote:\n>> On 2021/04/21 18:31, Masahiro Ikeda wrote:\n>>>> BTW, is it better to change PgStat_Counter from int64 to uint64 because> there aren't any counters with negative number?\n>> On second thought, it's ok even if the counters like wal_records get overflowed.\n>> Because they are always used to calculate the diff between the previous and\n>> current counters. Even when the current counters are overflowed and\n>> the previous ones are not, WalUsageAccumDiff() seems to return the right\n>> diff of them. If this understanding is right, I'd withdraw my comment and\n>> it's ok to use \"long\" type for those counters. Thought?\n> \n> Why long? It's of a platform dependent size, which doesn't seem useful here?\n\nI think it's ok to unify uint64. Although it's better to use small size for\ncache, the idea works well for only some platform which use 4bytes for \"long\".\n\n\n(Although I'm not familiar with the standardization...)\nIt seems that we need to adopt unsinged long if use \"long\", which may be 4bytes.\n\nI though WalUsageAccumDiff() seems to return the right value too. But, I\nresearched deeply and found that ISO/IEC 9899:1999 defines unsinged integer\nnever overflow(2.6.5 Types 9th section) although it doesn't define overflow of\nsigned integer type.\n\nIf my understanding is right, the definition only guarantee\nWalUsageAccumDiff() returns the right value only if the type is unsigned\ninteger. 
So, it's safe to change \"long\" to \"unsigned long\".\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n\n", "msg_date": "Fri, 23 Apr 2021 09:26:17 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: wal stats questions" }, { "msg_contents": "\n\nOn 2021/04/23 9:26, Masahiro Ikeda wrote:\n> \n> \n> On 2021/04/23 0:36, Andres Freund wrote:\n>> Hi\n>>\n>> On Thu, Apr 22, 2021, at 06:42, Fujii Masao wrote:\n>>> On 2021/04/21 18:31, Masahiro Ikeda wrote:\n>>>>> BTW, is it better to change PgStat_Counter from int64 to uint64 because> there aren't any counters with negative number?\n>>> On second thought, it's ok even if the counters like wal_records get overflowed.\n>>> Because they are always used to calculate the diff between the previous and\n>>> current counters. Even when the current counters are overflowed and\n>>> the previous ones are not, WalUsageAccumDiff() seems to return the right\n>>> diff of them. If this understanding is right, I'd withdraw my comment and\n>>> it's ok to use \"long\" type for those counters. Thought?\n>>\n>> Why long? It's of a platform dependent size, which doesn't seem useful here?\n\nI'm not sure why \"long\" was chosen for the counters in BufferUsage.\nAnd then I guess that maybe we didn't change that because using \"long\"\nfor them caused no actual issue in practice.\n\n\n> I think it's ok to unify uint64. Although it's better to use small size for\n> cache, the idea works well for only some platform which use 4bytes for \"long\".\n> \n> \n> (Although I'm not familiar with the standardization...)\n> It seems that we need to adopt unsinged long if use \"long\", which may be 4bytes.\n> \n> I though WalUsageAccumDiff() seems to return the right value too. 
But, I\n> researched deeply and found that ISO/IEC 9899:1999 defines unsinged integer\n> never overflow(2.6.5 Types 9th section) although it doesn't define overflow of\n> signed integer type.\n> \n> If my understanding is right, the definition only guarantee\n> WalUsageAccumDiff() returns the right value only if the type is unsigned\n> integer. So, it's safe to change \"long\" to \"unsigned long\".\n\nYes, we can change the counters so they use uint64. But if we do that,\nISTM that we need to change the code more than your patch does.\nFor example, even with the patch, pg_stat_statements uses Int64GetDatumFast()\nto report the counter like shared_blks_hit, but this should be changed?\nFor example, \"%ld\" should be changed to \"%llu\" at the following code in\nvacuumlazy.c? I think that we should check all codes that use the counters\nwhose types are changed to uint64.\n\n\t\t\tappendStringInfo(&buf,\n\t\t\t\t\t\t\t _(\"WAL usage: %ld records, %ld full page images, %llu bytes\"),\n\t\t\t\t\t\t\t walusage.wal_records,\n\t\t\t\t\t\t\t walusage.wal_fpi,\n\t\t\t\t\t\t\t (unsigned long long) walusage.wal_bytes);\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 23 Apr 2021 09:51:40 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: wal stats questions" }, { "msg_contents": "Hi,\n\nOn 2021-04-23 09:26:17 +0900, Masahiro Ikeda wrote:\n> On 2021/04/23 0:36, Andres Freund wrote:\n> > On Thu, Apr 22, 2021, at 06:42, Fujii Masao wrote:\n> >> On 2021/04/21 18:31, Masahiro Ikeda wrote:\n> >>>> BTW, is it better to change PgStat_Counter from int64 to uint64 because> there aren't any counters with negative number?\n> >> On second thought, it's ok even if the counters like wal_records get overflowed.\n> >> Because they are always used to calculate the diff between the previous and\n> >> current counters. 
Even when the current counters are overflowed and\n> >> the previous ones are not, WalUsageAccumDiff() seems to return the right\n> >> diff of them. If this understanding is right, I'd withdraw my comment and\n> >> it's ok to use \"long\" type for those counters. Thought?\n> > \n> > Why long? It's of a platform dependent size, which doesn't seem useful here?\n> \n> I think it's ok to unify uint64. Although it's better to use small size for\n> cache, the idea works well for only some platform which use 4bytes for \"long\".\n> \n> \n> (Although I'm not familiar with the standardization...)\n> It seems that we need to adopt unsinged long if use \"long\", which may be 4bytes.\n> \n> I though WalUsageAccumDiff() seems to return the right value too. But, I\n> researched deeply and found that ISO/IEC 9899:1999 defines unsinged integer\n> never overflow(2.6.5 Types 9th section) although it doesn't define overflow of\n> signed integer type.\n> \n> If my understanding is right, the definition only guarantee\n> WalUsageAccumDiff() returns the right value only if the type is unsigned\n> integer. So, it's safe to change \"long\" to \"unsigned long\".\n\nI think this should just use 64bit counters. Emitting more than 4\nbillion records in one transaction isn't actually particularly rare. 
And\nobviously WalUsageAccumDiff() can't do anything about that, once\noverflowed it overflowed.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 22 Apr 2021 18:25:53 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: wal stats questions" }, { "msg_contents": "\n\nOn 2021/04/23 10:25, Andres Freund wrote:\n> Hi,\n> \n> On 2021-04-23 09:26:17 +0900, Masahiro Ikeda wrote:\n>> On 2021/04/23 0:36, Andres Freund wrote:\n>>> On Thu, Apr 22, 2021, at 06:42, Fujii Masao wrote:\n>>>> On 2021/04/21 18:31, Masahiro Ikeda wrote:\n>>>>>> BTW, is it better to change PgStat_Counter from int64 to uint64 because> there aren't any counters with negative number?\n>>>> On second thought, it's ok even if the counters like wal_records get overflowed.\n>>>> Because they are always used to calculate the diff between the previous and\n>>>> current counters. Even when the current counters are overflowed and\n>>>> the previous ones are not, WalUsageAccumDiff() seems to return the right\n>>>> diff of them. If this understanding is right, I'd withdraw my comment and\n>>>> it's ok to use \"long\" type for those counters. Thought?\n>>>\n>>> Why long? It's of a platform dependent size, which doesn't seem useful here?\n>>\n>> I think it's ok to unify uint64. Although it's better to use small size for\n>> cache, the idea works well for only some platform which use 4bytes for \"long\".\n>>\n>>\n>> (Although I'm not familiar with the standardization...)\n>> It seems that we need to adopt unsigned long if use \"long\", which may be 4bytes.\n>>\n>> I thought WalUsageAccumDiff() seems to return the right value too. 
But, I\n>> researched deeply and found that ISO/IEC 9899:1999 defines unsigned integer\n>> never overflow(2.6.5 Types 9th section) although it doesn't define overflow of\n>> signed integer type.\n>>\n>> If my understanding is right, the definition only guarantees\n>> WalUsageAccumDiff() returns the right value only if the type is unsigned\n>> integer. So, it's safe to change \"long\" to \"unsigned long\".\n> \n> I think this should just use 64bit counters. Emitting more than 4\n> billion records in one transaction isn't actually particularly rare. And\n\nRight. I agree to change the types of the counters to int64.\n\nI think that it's better to make this change as a separate patch from\nthe changes for pg_stat_wal view.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 23 Apr 2021 16:30:27 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: wal stats questions" }, { "msg_contents": "On 2021/04/23 16:30, Fujii Masao wrote:\n> \n> \n> On 2021/04/23 10:25, Andres Freund wrote:\n>> Hi,\n>>\n>> On 2021-04-23 09:26:17 +0900, Masahiro Ikeda wrote:\n>>> On 2021/04/23 0:36, Andres Freund wrote:\n>>>> On Thu, Apr 22, 2021, at 06:42, Fujii Masao wrote:\n>>>>> On 2021/04/21 18:31, Masahiro Ikeda wrote:\n>>>>>>> BTW, is it better to change PgStat_Counter from int64 to uint64\n>>>>>>> because> there aren't any counters with negative number?\n>>>>> On second thought, it's ok even if the counters like wal_records get\n>>>>> overflowed.\n>>>>> Because they are always used to calculate the diff between the previous and\n>>>>> current counters. Even when the current counters are overflowed and\n>>>>> the previous ones are not, WalUsageAccumDiff() seems to return the right\n>>>>> diff of them. If this understanding is right, I'd withdraw my comment and\n>>>>> it's ok to use \"long\" type for those counters. 
Thought?\n>>>>\n>>>> Why long? It's of a platform dependent size, which doesn't seem useful here?\n>>>\n>>> I think it's ok to unify uint64. Although it's better to use small size for\n>>> cache, the idea works well for only some platform which use 4bytes for \"long\".\n>>>\n>>>\n>>> (Although I'm not familiar with the standardization...)\n>>> It seems that we need to adopt unsigned long if use \"long\", which may be 4bytes.\n>>>\n>>> I thought WalUsageAccumDiff() seems to return the right value too. But, I\n>>> researched deeply and found that ISO/IEC 9899:1999 defines unsigned integer\n>>> never overflow(2.6.5 Types 9th section) although it doesn't define overflow of\n>>> signed integer type.\n>>>\n>>> If my understanding is right, the definition only guarantees\n>>> WalUsageAccumDiff() returns the right value only if the type is unsigned\n>>> integer. So, it's safe to change \"long\" to \"unsigned long\".\n>>\n>> I think this should just use 64bit counters. Emitting more than 4\n>> billion records in one transaction isn't actually particularly rare. And\n> \n> Right. I agree to change the types of the counters to int64.\n> \n> I think that it's better to make this change as a separate patch from\n> the changes for pg_stat_wal view.\n\nThanks for your comments.\nOK, I separate two patches.\n\nFirst patch has only the changes for pg_stat_wal view.\n(\"v6-0001-performance-improvements-of-reporting-wal-stats-without-introducing-a-new-variable.patch\")\n\nSecond one has the changes for the type of the BufferUsage's and WalUsage's\nmembers. I change the type from long to int64. 
Is it better to make new thread?\n(\"v6-0002-change-the-data-type-of-XXXUsage-from-long-to-int64.patch\")\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION", "msg_date": "Mon, 26 Apr 2021 10:11:39 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: wal stats questions" }, { "msg_contents": "\n\nOn 2021/04/26 10:11, Masahiro Ikeda wrote:\n> \n> \n> On 2021/04/23 16:30, Fujii Masao wrote:\n>>\n>>\n>> On 2021/04/23 10:25, Andres Freund wrote:\n>>> Hi,\n>>>\n>>> On 2021-04-23 09:26:17 +0900, Masahiro Ikeda wrote:\n>>>> On 2021/04/23 0:36, Andres Freund wrote:\n>>>>> On Thu, Apr 22, 2021, at 06:42, Fujii Masao wrote:\n>>>>>> On 2021/04/21 18:31, Masahiro Ikeda wrote:\n>>>>>>>> BTW, is it better to change PgStat_Counter from int64 to uint64 because> there aren't any counters with negative number?\n>>>>>> On second thought, it's ok even if the counters like wal_records get overflowed.\n>>>>>> Because they are always used to calculate the diff between the previous and\n>>>>>> current counters. Even when the current counters are overflowed and\n>>>>>> the previous ones are not, WalUsageAccumDiff() seems to return the right\n>>>>>> diff of them. If this understanding is right, I'd withdraw my comment and\n>>>>>> it's ok to use \"long\" type for those counters. Thought?\n>>>>>\n>>>>> Why long? It's of a platform dependent size, which doesn't seem useful here?\n>>>>\n>>>> I think it's ok to unify uint64. Although it's better to use small size for\n>>>> cache, the idea works well for only some platform which use 4bytes for \"long\".\n>>>>\n>>>>\n>>>> (Although I'm not familiar with the standardization...)\n>>>> It seems that we need to adopt unsigned long if use \"long\", which may be 4bytes.\n>>>>\n>>>> I thought WalUsageAccumDiff() seems to return the right value too. 
But, I\n>>>> researched deeply and found that ISO/IEC 9899:1999 defines unsigned integer\n>>>> never overflow(2.6.5 Types 9th section) although it doesn't define overflow of\n>>>> signed integer type.\n>>>>\n>>>> If my understanding is right, the definition only guarantees\n>>>> WalUsageAccumDiff() returns the right value only if the type is unsigned\n>>>> integer. So, it's safe to change \"long\" to \"unsigned long\".\n>>>\n>>> I think this should just use 64bit counters. Emitting more than 4\n>>> billion records in one transaction isn't actually particularly rare. And\n>>\n>> Right. I agree to change the types of the counters to int64.\n>>\n>> I think that it's better to make this change as a separate patch from\n>> the changes for pg_stat_wal view.\n> \n> Thanks for your comments.\n> OK, I separate two patches.\n\nThanks!\n\n\n> \n> First patch has only the changes for pg_stat_wal view.\n> (\"v6-0001-performance-improvements-of-reporting-wal-stats-without-introducing-a-new-variable.patch\")\n\n+\t\tpgWalUsage.wal_records == prevWalUsage.wal_records &&\n+\t\twalStats.wal_write == 0 && walStats.wal_sync == 0 &&\n\nWalStats.m_wal_write should be checked here instead of walStats.wal_write?\n\nIs there really the case where the number of sync is larger than zero when\nthe number of writes is zero? If not, it's enough to check only the number\nof writes?\n\n+\t * wal records weren't generated. So, the counters of 'wal_fpi',\n+\t * 'wal_bytes', 'm_wal_buffers_full' are not updated neither.\n\nIt's better to add the assertion check that confirms\nm_wal_buffers_full == 0 whenever wal_records is larger than zero?\n\n> \n> Second one has the changes for the type of the BufferUsage's and WalUsage's\n> members. I change the type from long to int64. Is it better to make new thread?\n> (\"v6-0002-change-the-data-type-of-XXXUsage-from-long-to-int64.patch\")\n\nWill review the patch later. 
I'm ok to discuss that in this thread.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 27 Apr 2021 21:56:20 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: wal stats questions" }, { "msg_contents": "\n\nOn 2021/04/27 21:56, Fujii Masao wrote:\n> \n> \n> On 2021/04/26 10:11, Masahiro Ikeda wrote:\n>>\n>> First patch has only the changes for pg_stat_wal view.\n>> (\"v6-0001-performance-improvements-of-reporting-wal-stats-without-introducing-a-new-variable.patch\")\n>>\n> \n> +        pgWalUsage.wal_records == prevWalUsage.wal_records &&\n> +        walStats.wal_write == 0 && walStats.wal_sync == 0 &&\n> > WalStats.m_wal_write should be checked here instead of walStats.wal_write?\n\nThanks! Yes, I'll fix it.\n\n\n> Is there really the case where the number of sync is larger than zero when\n> the number of writes is zero? If not, it's enough to check only the number\n> of writes?\n\nI thought that there is the case if \"wal_sync_method\" is fdatasync, fsync or\nfsync_writethrough. The example case is following.\n\n(1) backend-1 writes the wal data because wal buffer has no space. But, it\ndoesn't sync the wal data.\n(2) backend-2 reads data pages. In the execution, it need to write and sync\nthe wal because dirty pages is selected as victim pages. backend-2 need to\nonly sync the wal data because the wal data were already written by backend-1,\nbut they weren't synced.\n\nI'm ok to change it since it's rare case.\n\n\n> +     * wal records weren't generated. So, the counters of 'wal_fpi',\n> +     * 'wal_bytes', 'm_wal_buffers_full' are not updated neither.\n> \n> It's better to add the assertion check that confirms\n> m_wal_buffers_full == 0 whenever wal_records is larger than zero?\n\nSorry, I couldn't understand yet. 
I thought that m_wal_buffers_full can be\nlarger than 0 if wal_records > 0.\n\nDo you suggest that the following assertion is needed?\n\n- if (memcmp(&WalStats, &all_zeroes, sizeof(PgStat_MsgWal)) == 0)\n- return false;\n+ if (pgWalUsage.wal_records == prevWalUsage.wal_records &&\n+ WalStats.m_wal_write == 0 && WalStats.m_wal_sync == 0)\n+ {\n+ Assert(pgWalUsage.wal_fpi == 0 && pgWalUsage.wal_bytes &&\n+ WalStats.m_wal_buffers_full == 0 &&\nWalStats.m_wal_write_time == 0 &&\n+ WalStats.m_wal_sync_time == 0);\n+ return;\n+ }\n\n\n>> Second one has the changes for the type of the BufferUsage's and WalUsage's\n>> members. I change the type from long to int64. Is it better to make new thread?\n>> (\"v6-0002-change-the-data-type-of-XXXUsage-from-long-to-int64.patch\")\n> \n> Will review the patch later. I'm ok to discuss that in this thread.\n\nThanks!\n\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 28 Apr 2021 09:10:21 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: wal stats questions" }, { "msg_contents": "\n\nOn 2021/04/28 9:10, Masahiro Ikeda wrote:\n> \n> \n> On 2021/04/27 21:56, Fujii Masao wrote:\n>>\n>>\n>> On 2021/04/26 10:11, Masahiro Ikeda wrote:\n>>>\n>>> First patch has only the changes for pg_stat_wal view.\n>>> (\"v6-0001-performance-improvements-of-reporting-wal-stats-without-introducing-a-new-variable.patch\")\n>>>\n>>\n>> +        pgWalUsage.wal_records == prevWalUsage.wal_records &&\n>> +        walStats.wal_write == 0 && walStats.wal_sync == 0 &&\n>>> WalStats.m_wal_write should be checked here instead of walStats.wal_write?\n> \n> Thanks! Yes, I'll fix it.\n\nThanks!\n\n\n> \n> \n>> Is there really the case where the number of sync is larger than zero when\n>> the number of writes is zero? If not, it's enough to check only the number\n>> of writes?\n> \n> I thought that there is the case if \"wal_sync_method\" is fdatasync, fsync or\n> fsync_writethrough. 
The example case is following.\n> \n> (1) backend-1 writes the wal data because wal buffer has no space. But, it\n> doesn't sync the wal data.\n> (2) backend-2 reads data pages. In the execution, it need to write and sync\n> the wal because dirty pages is selected as victim pages. backend-2 need to\n> only sync the wal data because the wal data were already written by backend-1,\n> but they weren't synced.\n\nYou're right. So let's leave the check of \"m_wal_sync == 0\".\n\n\n> \n> I'm ok to change it since it's rare case.\n> \n> \n>> +     * wal records weren't generated. So, the counters of 'wal_fpi',\n>> +     * 'wal_bytes', 'm_wal_buffers_full' are not updated neither.\n>>\n>> It's better to add the assertion check that confirms\n>> m_wal_buffers_full == 0 whenever wal_records is larger than zero?\n> \n> Sorry, I couldn't understand yet. I thought that m_wal_buffers_full can be\n> larger than 0 if wal_records > 0.\n> \n> Do you suggest that the following assertion is needed?\n> \n> - if (memcmp(&WalStats, &all_zeroes, sizeof(PgStat_MsgWal)) == 0)\n> - return false;\n> + if (pgWalUsage.wal_records == prevWalUsage.wal_records &&\n> + WalStats.m_wal_write == 0 && WalStats.m_wal_sync == 0)\n> + {\n> + Assert(pgWalUsage.wal_fpi == 0 && pgWalUsage.wal_bytes &&\n> + WalStats.m_wal_buffers_full == 0 &&\n> WalStats.m_wal_write_time == 0 &&\n> + WalStats.m_wal_sync_time == 0);\n> + return;\n> + }\n\nI was thinking to add the \"Assert(WalStats.m_wal_buffers_full)\" as a safe-guard\nbecause only m_wal_buffers_full is incremented in different places where\nwal_records, m_wal_write and m_wal_sync are incremented.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 11 May 2021 16:44:49 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: wal stats questions" }, { "msg_contents": "\n\nOn 2021/04/28 9:10, Masahiro Ikeda 
wrote:\n>>> Second one has the changes for the type of the BufferUsage's and WalUsage's\n>>> members. I change the type from long to int64. Is it better to make new thread?\n>>> (\"v6-0002-change-the-data-type-of-XXXUsage-from-long-to-int64.patch\")\n>>\n>> Will review the patch later. I'm ok to discuss that in this thread.\n> \n> Thanks!\n\n0002 patch looks good to me.\nI think we can commit this at first. Barring any objection, I will do that.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 11 May 2021 17:25:58 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: wal stats questions" }, { "msg_contents": "On 2021/05/11 16:44, Fujii Masao wrote:\n> \n> \n> On 2021/04/28 9:10, Masahiro Ikeda wrote:\n>>\n>>\n>> On 2021/04/27 21:56, Fujii Masao wrote:\n>>>\n>>>\n>>> On 2021/04/26 10:11, Masahiro Ikeda wrote:\n>>>>\n>>>> First patch has only the changes for pg_stat_wal view.\n>>>> (\"v6-0001-performance-improvements-of-reporting-wal-stats-without-introducing-a-new-variable.patch\")\n>>>>\n>>>>\n>>>\n>>> +        pgWalUsage.wal_records == prevWalUsage.wal_records &&\n>>> +        walStats.wal_write == 0 && walStats.wal_sync == 0 &&\n>>>> WalStats.m_wal_write should be checked here instead of walStats.wal_write?\n>>\n>> Thanks! Yes, I'll fix it.\n> \n> Thanks!\n\nThanks for your comments!\nI fixed them.\n\n>>> Is there really the case where the number of sync is larger than zero when\n>>> the number of writes is zero? If not, it's enough to check only the number\n>>> of writes?\n>>\n>> I thought that there is the case if \"wal_sync_method\" is fdatasync, fsync or\n>> fsync_writethrough. The example case is following.\n>>\n>> (1) backend-1 writes the wal data because wal buffer has no space. But, it\n>> doesn't sync the wal data.\n>> (2) backend-2 reads data pages. 
In the execution, it need to write and sync\n>> the wal because dirty pages is selected as victim pages. backend-2 need to\n>> only sync the wal data because the wal data were already written by backend-1,\n>> but they weren't synced.\n> \n> You're right. So let's leave the check of \"m_wal_sync == 0\".\n\nOK.\n\n>>> +     * wal records weren't generated. So, the counters of 'wal_fpi',\n>>> +     * 'wal_bytes', 'm_wal_buffers_full' are not updated neither.\n>>>\n>>> It's better to add the assertion check that confirms\n>>> m_wal_buffers_full == 0 whenever wal_records is larger than zero?\n>>\n>> Sorry, I couldn't understand yet. I thought that m_wal_buffers_full can be\n>> larger than 0 if wal_records > 0.\n>>\n>> Do you suggest that the following assertion is needed?\n>>\n>> -       if (memcmp(&WalStats, &all_zeroes, sizeof(PgStat_MsgWal)) == 0)\n>> -               return false;\n>> +       if (pgWalUsage.wal_records == prevWalUsage.wal_records &&\n>> +               WalStats.m_wal_write == 0 && WalStats.m_wal_sync == 0)\n>> +       {\n>> +               Assert(pgWalUsage.wal_fpi == 0 && pgWalUsage.wal_bytes &&\n>> +                               WalStats.m_wal_buffers_full == 0 &&\n>> WalStats.m_wal_write_time == 0 &&\n>> +                               WalStats.m_wal_sync_time == 0);\n>> +               return;\n>> +       }\n> \n> I was thinking to add the \"Assert(WalStats.m_wal_buffers_full)\" as a safe-guard\n> because only m_wal_buffers_full is incremented in different places where\n> wal_records, m_wal_write and m_wal_sync are incremented.\n\nUnderstood. 
I added the assertion for m_wal_buffers_full only.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION", "msg_date": "Tue, 11 May 2021 18:46:21 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: wal stats questions" }, { "msg_contents": "\n\nOn 2021/05/11 18:46, Masahiro Ikeda wrote:\n> \n> \n> On 2021/05/11 16:44, Fujii Masao wrote:\n>>\n>>\n>> On 2021/04/28 9:10, Masahiro Ikeda wrote:\n>>>\n>>>\n>>> On 2021/04/27 21:56, Fujii Masao wrote:\n>>>>\n>>>>\n>>>> On 2021/04/26 10:11, Masahiro Ikeda wrote:\n>>>>>\n>>>>> First patch has only the changes for pg_stat_wal view.\n>>>>> (\"v6-0001-performance-improvements-of-reporting-wal-stats-without-introducing-a-new-variable.patch\")\n>>>>>\n>>>>>\n>>>>\n>>>> +        pgWalUsage.wal_records == prevWalUsage.wal_records &&\n>>>> +        walStats.wal_write == 0 && walStats.wal_sync == 0 &&\n>>>>> WalStats.m_wal_write should be checked here instead of walStats.wal_write?\n>>>\n>>> Thanks! Yes, I'll fix it.\n>>\n>> Thanks!\n> \n> Thanks for your comments!\n> I fixed them.\n\nThanks for updating the patch!\n\n \tif ((pgStatTabList == NULL || pgStatTabList->tsa_used == 0) &&\n \t\tpgStatXactCommit == 0 && pgStatXactRollback == 0 &&\n+\t\tpgWalUsage.wal_records == prevWalUsage.wal_records &&\n+\t\tWalStats.m_wal_write == 0 && WalStats.m_wal_sync == 0 &&\n\nI'm just wondering if the above WAL activity counters need to be checked.\nMaybe it's not necessary because \"pgStatXactCommit == 0 && pgStatXactRollback == 0\"\nis checked? IOW, is there really the case where WAL activity counters are updated\neven when both pgStatXactCommit and pgStatXactRollback are zero?\n\n\n+\tif (pgWalUsage.wal_records != prevWalUsage.wal_records)\n+\t{\n+\t\tWalUsage\twalusage;\n+\n+\t\t/*\n+\t\t * Calculate how much WAL usage counters were increased by substracting\n+\t\t * the previous counters from the current ones. 
Fill the results in\n+\t\t * WAL stats message.\n+\t\t */\n+\t\tMemSet(&walusage, 0, sizeof(WalUsage));\n+\t\tWalUsageAccumDiff(&walusage, &pgWalUsage, &prevWalUsage);\n\nIsn't it better to move the code \"prevWalUsage = pgWalUsage\" into here?\nBecause it's necessary only when pgWalUsage.wal_records != prevWalUsage.wal_records.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 12 May 2021 19:19:34 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: wal stats questions" }, { "msg_contents": "On 2021/05/12 19:19, Fujii Masao wrote:\n> \n> \n> On 2021/05/11 18:46, Masahiro Ikeda wrote:\n>>\n>>\n>> On 2021/05/11 16:44, Fujii Masao wrote:\n>>>\n>>>\n>>> On 2021/04/28 9:10, Masahiro Ikeda wrote:\n>>>>\n>>>>\n>>>> On 2021/04/27 21:56, Fujii Masao wrote:\n>>>>>\n>>>>>\n>>>>> On 2021/04/26 10:11, Masahiro Ikeda wrote:\n>>>>>>\n>>>>>> First patch has only the changes for pg_stat_wal view.\n>>>>>> (\"v6-0001-performance-improvements-of-reporting-wal-stats-without-introducing-a-new-variable.patch\")\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>>\n>>>>> +        pgWalUsage.wal_records == prevWalUsage.wal_records &&\n>>>>> +        walStats.wal_write == 0 && walStats.wal_sync == 0 &&\n>>>>>> WalStats.m_wal_write should be checked here instead of walStats.wal_write?\n>>>>\n>>>> Thanks! 
Yes, I'll fix it.\n>>>\n>>> Thanks!\n>>\n>> Thanks for your comments!\n>> I fixed them.\n> \n> Thanks for updating the patch!\n> \n>      if ((pgStatTabList == NULL || pgStatTabList->tsa_used == 0) &&\n>          pgStatXactCommit == 0 && pgStatXactRollback == 0 &&\n> +        pgWalUsage.wal_records == prevWalUsage.wal_records &&\n> +        WalStats.m_wal_write == 0 && WalStats.m_wal_sync == 0 &&\n> \n> I'm just wondering if the above WAL activity counters need to be checked.\n> Maybe it's not necessary because \"pgStatXactCommit == 0 && pgStatXactRollback\n> == 0\"\n> is checked? IOW, is there really the case where WAL activity counters are updated\n> even when both pgStatXactCommit and pgStatXactRollback are zero?\n\nThanks for checking.\n\nI haven't found the case yet. But, I added the condition because there is a\ndiscussion that it's safer.\n\n(It seems the mail thread chain is broken, Sorry...)\nI copy the discussion at the time below.\n\nhttps://www.postgresql.org/message-id/20210330.172843.267174731834281075.horikyota.ntt%40gmail.com\n>>>> 3) Doing if (memcmp(&WalStats, &all_zeroes, sizeof(PgStat_MsgWal)) == 0)\n>>>> just to figure out if there's been any changes isn't all that\n>>>> cheap. This is regularly exercised in read-only workloads too. Seems\n>>>> adding a boolean WalStats.have_pending = true or such would be\n>>>> better.\n>>>> 4) For plain backends pgstat_report_wal() is called by\n>>>> pgstat_report_stat() - but it is not checked as part of the \"Don't\n>>>> expend a clock check if nothing to do\" check at the top. 
It's\n>>>> probably rare to have pending wal stats without also passing one of\n>>>> the other conditions, but ...\n>>>\n>>> I added the logic to check if the stats counters are updated or not in\n>>> pgstat_report_stat() using not only generated wal record but also write/sync\n>>> counters, and it can skip to call reporting function.\n>>\n>> I removed the checking code whether the wal stats counters were updated or not\n>> in pgstat_report_stat() since I couldn't understand the valid case yet.\n>> pgstat_report_stat() is called by backends when the transaction is finished.\n>> This means that the condition seems always pass.\n>\n> Doesn't the same holds for all other counters? If you are saying that\n> \"wal counters should be zero if all other stats counters are zero\", we\n> need an assertion to check that and a comment to explain that.\n>\n> I personally find it safer to add the WAL-stats condition to the\n> fast-return check, rather than addin such assertion.\n\n\n> +    if (pgWalUsage.wal_records != prevWalUsage.wal_records)\n> +    {\n> +        WalUsage    walusage;\n> +\n> +        /*\n> +         * Calculate how much WAL usage counters were increased by substracting\n> +         * the previous counters from the current ones. 
Fill the results in\n> +         * WAL stats message.\n> +         */\n> +        MemSet(&walusage, 0, sizeof(WalUsage));\n> +        WalUsageAccumDiff(&walusage, &pgWalUsage, &prevWalUsage);\n> \n> Isn't it better to move the code \"prevWalUsage = pgWalUsage\" into here?\n> Because it's necessary only when pgWalUsage.wal_records !=\n> prevWalUsage.wal_records.\n\nYes, I fixed it.\n\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION", "msg_date": "Thu, 13 May 2021 09:05:37 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: wal stats questions" }, { "msg_contents": "On 2021-05-13 09:05, Masahiro Ikeda wrote:\n> On 2021/05/12 19:19, Fujii Masao wrote:\n>> \n>> \n>> On 2021/05/11 18:46, Masahiro Ikeda wrote:\n>>> \n>>> \n>>> On 2021/05/11 16:44, Fujii Masao wrote:\n>>>> \n>>>> \n>>>> On 2021/04/28 9:10, Masahiro Ikeda wrote:\n>>>>> \n>>>>> \n>>>>> On 2021/04/27 21:56, Fujii Masao wrote:\n>>>>>> \n>>>>>> \n>>>>>> On 2021/04/26 10:11, Masahiro Ikeda wrote:\n>>>>>>> \n>>>>>>> First patch has only the changes for pg_stat_wal view.\n>>>>>>> (\"v6-0001-performance-improvements-of-reporting-wal-stats-without-introducing-a-new-variable.patch\")\n>>>>>>> \n>>>>>>> \n>>>>>>> \n>>>>>> \n>>>>>> +        pgWalUsage.wal_records == prevWalUsage.wal_records &&\n>>>>>> +        walStats.wal_write == 0 && walStats.wal_sync == 0 &&\n>>>>>>> WalStats.m_wal_write should be checked here instead of \n>>>>>>> walStats.wal_write?\n>>>>> \n>>>>> Thanks! 
Yes, I'll fix it.\n>>>> \n>>>> Thanks!\n>>> \n>>> Thanks for your comments!\n>>> I fixed them.\n>> \n>> Thanks for updating the patch!\n>> \n>>      if ((pgStatTabList == NULL || pgStatTabList->tsa_used == 0) &&\n>>          pgStatXactCommit == 0 && pgStatXactRollback == 0 &&\n>> +        pgWalUsage.wal_records == prevWalUsage.wal_records &&\n>> +        WalStats.m_wal_write == 0 && WalStats.m_wal_sync == 0 &&\n>> \n>> I'm just wondering if the above WAL activity counters need to be \n>> checked.\n>> Maybe it's not necessary because \"pgStatXactCommit == 0 && \n>> pgStatXactRollback\n>> == 0\"\n>> is checked? IOW, is there really the case where WAL activity counters \n>> are updated\n>> even when both pgStatXactCommit and pgStatXactRollback are zero?\n> \n> Thanks for checking.\n> \n> I haven't found the case yet. But, I added the condition because there \n> is a\n> discussion that it's safer.\n> \n> (It seems the mail thread chain is broken, Sorry...)\n> I copy the discussion at the time below.\n> \n> https://www.postgresql.org/message-id/20210330.172843.267174731834281075.horikyota.ntt%40gmail.com\n>>>>> 3) Doing if (memcmp(&WalStats, &all_zeroes, sizeof(PgStat_MsgWal)) \n>>>>> == 0)\n>>>>> just to figure out if there's been any changes isn't all that\n>>>>> cheap. This is regularly exercised in read-only workloads too. \n>>>>> Seems\n>>>>> adding a boolean WalStats.have_pending = true or such would be\n>>>>> better.\n>>>>> 4) For plain backends pgstat_report_wal() is called by\n>>>>> pgstat_report_stat() - but it is not checked as part of the \n>>>>> \"Don't\n>>>>> expend a clock check if nothing to do\" check at the top. 
It's\n>>>>> probably rare to have pending wal stats without also passing one \n>>>>> of\n>>>>> the other conditions, but ...\n>>>> \n>>>> I added the logic to check if the stats counters are updated or not \n>>>> in\n>>>> pgstat_report_stat() using not only generated wal record but also \n>>>> write/sync\n>>>> counters, and it can skip to call reporting function.\n>>> \n>>> I removed the checking code whether the wal stats counters were \n>>> updated or not\n>>> in pgstat_report_stat() since I couldn't understand the valid case \n>>> yet.\n>>> pgstat_report_stat() is called by backends when the transaction is \n>>> finished.\n>>> This means that the condition seems always pass.\n>> \n>> Doesn't the same holds for all other counters? If you are saying that\n>> \"wal counters should be zero if all other stats counters are zero\", we\n>> need an assertion to check that and a comment to explain that.\n>> \n>> I personally find it safer to add the WAL-stats condition to the\n>> fast-return check, rather than addin such assertion.\n> \n> \n>> +    if (pgWalUsage.wal_records != prevWalUsage.wal_records)\n>> +    {\n>> +        WalUsage    walusage;\n>> +\n>> +        /*\n>> +         * Calculate how much WAL usage counters were increased by \n>> substracting\n>> +         * the previous counters from the current ones. 
Fill the \n>> results in\n>> +         * WAL stats message.\n>> +         */\n>> +        MemSet(&walusage, 0, sizeof(WalUsage));\n>> +        WalUsageAccumDiff(&walusage, &pgWalUsage, &prevWalUsage);\n>> \n>> Isn't it better to move the code \"prevWalUsage = pgWalUsage\" into \n>> here?\n>> Because it's necessary only when pgWalUsage.wal_records !=\n>> prevWalUsage.wal_records.\n> \n> Yes, I fixed it.\n> \n> \n> Regards,\n\nThanks for updating the patch!\n\n> +\t * is executed, wal records aren't nomally generated (although HOT \n> makes\n\nnomally -> normally?\n\n> +\t * It's not enough to check the number of generated wal records, for\n> +\t * example the walwriter may write/sync the WAL although it doesn't\n\nYou use both 'wal' and 'WAL' in the comments, but are there any \nintension?\n\nRegards,\n\n\n", "msg_date": "Mon, 17 May 2021 16:07:31 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: wal stats questions" }, { "msg_contents": "Thanks for your comments!\n\n>> +     * is executed, wal records aren't nomally generated (although HOT makes\n> \n> nomally -> normally?\n\nYes, fixed.\n\n>> +     * It's not enough to check the number of generated wal records, for\n>> +     * example the walwriter may write/sync the WAL although it doesn't\n> \n> You use both 'wal' and 'WAL' in the comments, but are there any intension?\n\nNo, I changed to use 'WAL' only. 
Although some other comments have 'wal' and\n'WAL', it seems that 'WAL' is often used.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION", "msg_date": "Mon, 17 May 2021 16:39:51 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: wal stats questions" }, { "msg_contents": "\n\nOn 2021/05/17 16:39, Masahiro Ikeda wrote:\n> \n> Thanks for your comments!\n> \n>>> +     * is executed, wal records aren't nomally generated (although HOT makes\n>>\n>> nomally -> normally?\n> \n> Yes, fixed.\n> \n>>> +     * It's not enough to check the number of generated wal records, for\n>>> +     * example the walwriter may write/sync the WAL although it doesn't\n>>\n>> You use both 'wal' and 'WAL' in the comments, but are there any intension?\n> \n> No, I changed to use 'WAL' only. Although some other comments have 'wal' and\n> 'WAL', it seems that 'WAL' is often used.\n\nThanks for updating the patch!\n\n+ * Buffer and generated WAL usage counters.\n+ *\n+ * The counters are accumulated values. There are infrastructures\n+ * to add the values and calculate the difference within a specific period.\n\nIs it really worth adding these comments here?\n\n+\t * Note: regarding to WAL statistics counters, it's not enough to check\n+\t * only whether the WAL record is generated or not. If a read transaction\n+\t * is executed, WAL records aren't normally generated (although HOT makes\n+\t * WAL records). But, just writes and syncs the WAL data may happen when\n+\t * to flush buffers.\n\nAren't the following comments better?\n\n------------------------------\nTo determine whether any WAL activity has occurred since last time, not only the number of generated WAL records but also the numbers of WAL writes and syncs need to be checked. 
Because even transaction that generates no WAL records can write or sync WAL data when flushing the data pages.\n------------------------------\n\n-\t * This function can be called even if nothing at all has happened. In\n-\t * this case, avoid sending a completely empty message to the stats\n-\t * collector.\n\nI think that it's better to leave this comment as it is.\n\n+\t * First, to check the WAL stats counters were updated.\n+\t *\n+\t * Even if the 'force' is true, we don't need to send the stats if the\n+\t * counters were not updated.\n+\t *\n+\t * We can know whether the counters were updated or not to check only\n+\t * three counters. So, for performance, we don't allocate another memory\n+\t * spaces and check the all stats like pgstat_send_slru().\n\nIs it really worth adding these comments here?\n\n+\t * The current 'wal_records' is the same as the previous one means that\n+\t * WAL records weren't generated. So, the counters of 'wal_fpi',\n+\t * 'wal_bytes', 'm_wal_buffers_full' are not updated neither.\n+\t *\n+\t * It's not enough to check the number of generated WAL records, for\n+\t * example the walwriter may write/sync the WAL although it doesn't\n+\t * generate WAL records. 'm_wal_write' and 'm_wal_sync' are zero means the\n+\t * counters of time spent are zero too.\n\nAren't the following comments better?\n\n------------------------\nCheck wal_records counter to determine whether any WAL activity has happened since last time. Note that other WalUsage counters don't need to be checked because they are incremented always together with wal_records counter.\n\nm_wal_buffers_full also doesn't need to be checked because it's incremented only when at least one WAL record is generated (i.e., wal_records counter is incremented). But for safely, we assert that m_wal_buffers_full is always zero when no WAL record is generated\n\nThis function can be called by a process like walwriter that normally generates no WAL records. 
To determine whether any WAL activity has happened at that process since the last time, the numbers of WAL writes and syncs are also checked.\n------------------------\n\n+ * The accumulated counters for buffer usage.\n+ *\n+ * The reason the counters are accumulated values is to avoid unexpected\n+ * reset because many callers may use them.\n\nAren't the following comments better?\n\n------------------------\nThese counters keep being incremented infinitely, i.e., must never be reset to zero, so that we can calculate how much the counters are incremented in an arbitrary period.\n------------------------\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 17 May 2021 22:34:33 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: wal stats questions" }, { "msg_contents": "On 2021/05/17 22:34, Fujii Masao wrote:\n> \n> \n> On 2021/05/17 16:39, Masahiro Ikeda wrote:\n>>\n>> Thanks for your comments!\n>>\n>>>> +     * is executed, wal records aren't nomally generated (although HOT makes\n>>>\n>>> nomally -> normally?\n>>\n>> Yes, fixed.\n>>\n>>>> +     * It's not enough to check the number of generated wal records, for\n>>>> +     * example the walwriter may write/sync the WAL although it doesn't\n>>>\n>>> You use both 'wal' and 'WAL' in the comments, but are there any intension?\n>>\n>> No, I changed to use 'WAL' only. Although some other comments have 'wal' and\n>> 'WAL', it seems that 'WAL' is often used.\n> \n> Thanks for updating the patch!\n\nThanks a lot of comments!\n\n> + * Buffer and generated WAL usage counters.\n> + *\n> + * The counters are accumulated values. There are infrastructures\n> + * to add the values and calculate the difference within a specific period.\n> \n> Is it really worth adding these comments here?\n\nBufferUsage has almost same comments. 
So, I removed it.\n\n> +     * Note: regarding to WAL statistics counters, it's not enough to check\n> +     * only whether the WAL record is generated or not. If a read transaction\n> +     * is executed, WAL records aren't normally generated (although HOT makes\n> +     * WAL records). But, just writes and syncs the WAL data may happen when\n> +     * to flush buffers.\n> \n> Aren't the following comments better?\n> \n> ------------------------------\n> To determine whether any WAL activity has occurred since last time, not only\n> the number of generated WAL records but also the numbers of WAL writes and\n> syncs need to be checked. Because even transaction that generates no WAL\n> records can write or sync WAL data when flushing the data pages.\n> ------------------------------\n\nThanks. Yes, I fixed it.\n\n> -     * This function can be called even if nothing at all has happened. In\n> -     * this case, avoid sending a completely empty message to the stats\n> -     * collector.\n> \n> I think that it's better to leave this comment as it is.\n\nOK. I leave it.\n\n> +     * First, to check the WAL stats counters were updated.\n> +     *\n> +     * Even if the 'force' is true, we don't need to send the stats if the\n> +     * counters were not updated.\n> +     *\n> +     * We can know whether the counters were updated or not to check only\n> +     * three counters. So, for performance, we don't allocate another memory\n> +     * spaces and check the all stats like pgstat_send_slru().\n> \n> Is it really worth adding these comments here?\n\nI removed them because the following comments are enough.\n\n* This function can be called even if nothing at all has happened. In\n* this case, avoid sending a completely empty message to the stats\n* collector.\n\n> +     * The current 'wal_records' is the same as the previous one means that\n> +     * WAL records weren't generated. 
So, the counters of 'wal_fpi',\n> +     * 'wal_bytes', 'm_wal_buffers_full' are not updated neither.\n> +     *\n> +     * It's not enough to check the number of generated WAL records, for\n> +     * example the walwriter may write/sync the WAL although it doesn't\n> +     * generate WAL records. 'm_wal_write' and 'm_wal_sync' are zero means the\n> +     * counters of time spent are zero too.\n> \n> Aren't the following comments better?\n> \n> ------------------------\n> Check wal_records counter to determine whether any WAL activity has happened\n> since last time. Note that other WalUsage counters don't need to be checked\n> because they are incremented always together with wal_records counter.\n> \n> m_wal_buffers_full also doesn't need to be checked because it's incremented\n> only when at least one WAL record is generated (i.e., wal_records counter is\n> incremented). But for safely, we assert that m_wal_buffers_full is always zero\n> when no WAL record is generated\n> \n> This function can be called by a process like walwriter that normally\n> generates no WAL records. 
To determine whether any WAL activity has happened\n> at that process since the last time, the numbers of WAL writes and syncs are\n> also checked.\n> ------------------------\n\nYes, I modified them.\n\n> + * The accumulated counters for buffer usage.\n> + *\n> + * The reason the counters are accumulated values is to avoid unexpected\n> + * reset because many callers may use them.\n> \n> Aren't the following comments better?\n> \n> ------------------------\n> These counters keep being incremented infinitely, i.e., must never be reset to\n> zero, so that we can calculate how much the counters are incremented in an\n> arbitrary period.\n> ------------------------\n\nYes, thanks.\nI fixed it.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION", "msg_date": "Tue, 18 May 2021 09:57:18 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: wal stats questions" }, { "msg_contents": "\n\nOn 2021/05/18 9:57, Masahiro Ikeda wrote:\n> \n> \n> On 2021/05/17 22:34, Fujii Masao wrote:\n>>\n>>\n>> On 2021/05/17 16:39, Masahiro Ikeda wrote:\n>>>\n>>> Thanks for your comments!\n>>>\n>>>>> +     * is executed, wal records aren't nomally generated (although HOT makes\n>>>>\n>>>> nomally -> normally?\n>>>\n>>> Yes, fixed.\n>>>\n>>>>> +     * It's not enough to check the number of generated wal records, for\n>>>>> +     * example the walwriter may write/sync the WAL although it doesn't\n>>>>\n>>>> You use both 'wal' and 'WAL' in the comments, but are there any intension?\n>>>\n>>> No, I changed to use 'WAL' only. Although some other comments have 'wal' and\n>>> 'WAL', it seems that 'WAL' is often used.\n>>\n>> Thanks for updating the patch!\n> \n> Thanks a lot of comments!\n> \n>> + * Buffer and generated WAL usage counters.\n>> + *\n>> + * The counters are accumulated values. 
There are infrastructures\n>> + * to add the values and calculate the difference within a specific period.\n>>\n>> Is it really worth adding these comments here?\n> \n> BufferUsage has almost same comments. So, I removed it.\n> \n>> +     * Note: regarding to WAL statistics counters, it's not enough to check\n>> +     * only whether the WAL record is generated or not. If a read transaction\n>> +     * is executed, WAL records aren't normally generated (although HOT makes\n>> +     * WAL records). But, just writes and syncs the WAL data may happen when\n>> +     * to flush buffers.\n>>\n>> Aren't the following comments better?\n>>\n>> ------------------------------\n>> To determine whether any WAL activity has occurred since last time, not only\n>> the number of generated WAL records but also the numbers of WAL writes and\n>> syncs need to be checked. Because even transaction that generates no WAL\n>> records can write or sync WAL data when flushing the data pages.\n>> ------------------------------\n> \n> Thanks. Yes, I fixed it.\n> \n>> -     * This function can be called even if nothing at all has happened. In\n>> -     * this case, avoid sending a completely empty message to the stats\n>> -     * collector.\n>>\n>> I think that it's better to leave this comment as it is.\n> \n> OK. I leave it.\n> \n>> +     * First, to check the WAL stats counters were updated.\n>> +     *\n>> +     * Even if the 'force' is true, we don't need to send the stats if the\n>> +     * counters were not updated.\n>> +     *\n>> +     * We can know whether the counters were updated or not to check only\n>> +     * three counters. So, for performance, we don't allocate another memory\n>> +     * spaces and check the all stats like pgstat_send_slru().\n>>\n>> Is it really worth adding these comments here?\n> \n> I removed them because the following comments are enough.\n> \n> * This function can be called even if nothing at all has happened. 
In\n> * this case, avoid sending a completely empty message to the stats\n> * collector.\n> \n>> +     * The current 'wal_records' is the same as the previous one means that\n>> +     * WAL records weren't generated. So, the counters of 'wal_fpi',\n>> +     * 'wal_bytes', 'm_wal_buffers_full' are not updated neither.\n>> +     *\n>> +     * It's not enough to check the number of generated WAL records, for\n>> +     * example the walwriter may write/sync the WAL although it doesn't\n>> +     * generate WAL records. 'm_wal_write' and 'm_wal_sync' are zero means the\n>> +     * counters of time spent are zero too.\n>>\n>> Aren't the following comments better?\n>>\n>> ------------------------\n>> Check wal_records counter to determine whether any WAL activity has happened\n>> since last time. Note that other WalUsage counters don't need to be checked\n>> because they are incremented always together with wal_records counter.\n>>\n>> m_wal_buffers_full also doesn't need to be checked because it's incremented\n>> only when at least one WAL record is generated (i.e., wal_records counter is\n>> incremented). But for safely, we assert that m_wal_buffers_full is always zero\n>> when no WAL record is generated\n>>\n>> This function can be called by a process like walwriter that normally\n>> generates no WAL records. 
To determine whether any WAL activity has happened\n>> at that process since the last time, the numbers of WAL writes and syncs are\n>> also checked.\n>> ------------------------\n> \n> Yes, I modified them.\n> \n>> + * The accumulated counters for buffer usage.\n>> + *\n>> + * The reason the counters are accumulated values is to avoid unexpected\n>> + * reset because many callers may use them.\n>>\n>> Aren't the following comments better?\n>>\n>> ------------------------\n>> These counters keep being incremented infinitely, i.e., must never be reset to\n>> zero, so that we can calculate how much the counters are incremented in an\n>> arbitrary period.\n>> ------------------------\n> \n> Yes, thanks.\n> I fixed it.\n\nThanks for updating the patch! I modified some comments slightly and\npushed that version of the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 19 May 2021 11:40:52 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: wal stats questions" }, { "msg_contents": "\nOn 2021/05/19 11:40, Fujii Masao wrote:\n> Thanks for updating the patch! I modified some comments slightly and\n> pushed that version of the patch.\n\nThanks a lot!\n\nRegards,\n\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 20 May 2021 09:40:59 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: wal stats questions" } ]
[ { "msg_contents": "Hi all,\n\nAs $subject says, I noticed that while scanning the area. Any\nobjections to make all that more consistent with the style of HEAD?\nPlease see the attached.\n--\nMichael", "msg_date": "Fri, 23 Apr 2021 13:54:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Forgot some LSN_FORMAT_ARGS() in xlogreader.c" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> As $subject says, I noticed that while scanning the area. Any\n> objections to make all that more consistent with the style of HEAD?\n> Please see the attached.\n\n+1, it's not surprising some places didn't get that memo yet.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 23 Apr 2021 01:03:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Forgot some LSN_FORMAT_ARGS() in xlogreader.c" }, { "msg_contents": "At Fri, 23 Apr 2021 13:54:15 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> Hi all,\n> \n> As $subject says, I noticed that while scanning the area. Any\n> objections to make all that more consistent with the style of HEAD?\n> Please see the attached.\n\nAFAICS it fixes the all remaining LSN parameters.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 23 Apr 2021 14:18:10 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Forgot some LSN_FORMAT_ARGS() in xlogreader.c" }, { "msg_contents": "On Fri, Apr 23, 2021 at 02:18:10PM +0900, Kyotaro Horiguchi wrote:\n> AFAICS it fixes the all remaining LSN parameters.\n\nThanks for double-checking. I was not sure if I got all of them or\nnot. Applied that now as of 4aba61b.\n--\nMichael", "msg_date": "Sat, 24 Apr 2021 09:33:49 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Forgot some LSN_FORMAT_ARGS() in xlogreader.c" } ]
[ { "msg_contents": "Hi,\n\nWe have automated tests for many specific replication and recovery\nscenarios, but nothing that tests replay of a wide range of records.\nPeople working on recovery code often use installcheck (presumably\nalong with other custom tests) to exercise it, sometimes with\nwal_consistency_check enabled. So, why don't we automate that? Aside\nfrom exercising the WAL decoding machinery (which brought me here),\nthat'd hopefully provide some decent improvements in coverage of the\nvarious redo routines, many of which are not currently exercised at\nall.\n\nI'm not quite sure where it belongs, though. The attached initial\nsketch patch puts it under rc/test/recovery near other similar things,\nbut I'm not sure if it's really OK to invoke make -C ../regress from\nhere. I copied pg_update/test.sh's trick of using a different\noutputdir to avoid clobbering a concurrent run under src/test/regress,\nand I also needed to invent a way to stop it from running the cursed\ntablespace test (deferring startup of the standby also works but eats\nway too much space, which I learned after blowing out a smallish\ndevelopment VM's disk). Alternatively, we could put it under\nsrc/test/regress, which makes some kind of logical sense, but it would\nmake a quick \"make check\" take more than twice as long. Perhaps it\ncould use a different target, one that check-world somehow reaches but\nnot check, and that also doesn't seem great.
Ideas on how to\nstructure this or improve the perl welcome.", "msg_date": "Fri, 23 Apr 2021 17:37:48 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "A test for replay of regression tests" }, { "msg_contents": "From: Thomas Munro <thomas.munro@gmail.com>\r\n> We have automated tests for many specific replication and recovery scenarios,\r\n> but nothing that tests replay of a wide range of records.\r\n> People working on recovery code often use installcheck (presumably along\r\n> with other custom tests) to exercise it, sometimes with\r\n> wal_consistency_check enabled. So, why don't we automate that? Aside\r\n> from exercising the WAL decoding machinery (which brought me here), that'd\r\n> hopefully provide some decent improvements in coverage of the various redo\r\n> routines, many of which are not currently exercised at all.\r\n> \r\n> I'm not quite sure where it belongs, though. The attached initial sketch patch\r\n\r\nI think that's a good attempt. It'd be better to confirm that the database contents are identical on the primary and standby. But... I remember when I ran make installcheck with a standby connected, then ran pg_dumpall on both the primary and standby and compare the two output files, about 40 lines of difference were observed. Those differences were all about the sequence values. I didn't think about whether I should/can remove the differences.\r\n\r\n\r\n+# XXX The tablespace tests don't currently work when the standby shares a\r\n+# filesystem with the primary due to colliding absolute paths. We'll skip\r\n+# that for now.\r\n\r\nMaybe we had better have a hidden feature that creates tablespace contents in\r\n\r\n/tablespace_location/PG_..._<some_name>/\r\n\r\n<some_name> is the value of cluster_name or application_name.\r\n\r\nOr, we may provide a visible feature that allows users to store tablespace contents in a specified directory regardless of the LOCATION value in CREATE TABLESPACE. 
For instance, add a GUC like\r\n\r\n table_space_directory = '/some_dir'\r\n\r\nThen, the tablespace contents are stored in /some_dir/<tablespace_name>/. This may be useful if a DBaaS provider wants to offer some tablespace-based feature, say tablespace transparent data encryption, but doesn't want users to specify filesystem directories.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n", "msg_date": "Fri, 23 Apr 2021 06:27:25 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: A test for replay of regression tests" }, { "msg_contents": "On Fri, Apr 23, 2021 at 6:27 PM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n> From: Thomas Munro <thomas.munro@gmail.com>\n> > I'm not quite sure where it belongs, though. The attached initial sketch patch\n>\n> I think that's a good attempt. It'd be better to confirm that the database contents are identical on the primary and standby. But... I remember when I ran make installcheck with a standby connected, then ran pg_dumpall on both the primary and standby and compare the two output files, about 40 lines of difference were observed. Those differences were all about the sequence values. I didn't think about whether I should/can remove the differences.\n\nInteresting idea. I hadn't noticed the thing with sequences before.\n\n> +# XXX The tablespace tests don't currently work when the standby shares a\n> +# filesystem with the primary due to colliding absolute paths. We'll skip\n> +# that for now.\n>\n> Maybe we had better have a hidden feature that creates tablespace contents in\n>\n> /tablespace_location/PG_..._<some_name>/\n>\n> <some_name> is the value of cluster_name or application_name.\n>\n> Or, we may provide a visible feature that allows users to store tablespace contents in a specified directory regardless of the LOCATION value in CREATE TABLESPACE. 
For instance, add a GUC like\n>\n> table_space_directory = '/some_dir'\n>\n> Then, the tablespace contents are stored in /some_dir/<tablespace_name>/. This may be useful if a DBaaS provider wants to offer some tablespace-based feature, say tablespace transparent data encryption, but doesn't want users to specify filesystem directories.\n\nYeah, a few similar ideas have been put forward before, for example in\nthis thread:\n\nhttps://www.postgresql.org/message-id/flat/CALfoeisEF92F5nJ-aAcuWTvF_Aogxq_1bHLem_kVfM_tHc2mfg%40mail.gmail.com\n\n... but also others. I would definitely like to fix that problem too\n(having worked on several things that interact with tablespaces, I\ndefinitely want to be able to extend those tests in future patches,\nand get coverage on the build farm and CI), but with --skip-tests that\ncould be done independently.\n\nApparently the invocation of make failed for some reason on CI (not\nsure why, works for me), but in any case, that feels a bit clunky and\nmight not ever work on Windows, so perhaps we should figure out how to\ninvoke the pg_regress[.exe] program directly.\n\n\n", "msg_date": "Fri, 23 Apr 2021 19:51:11 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "Hi,\n\nOn 2021-04-23 17:37:48 +1200, Thomas Munro wrote:\n> We have automated tests for many specific replication and recovery\n> scenarios, but nothing that tests replay of a wide range of records.\n> People working on recovery code often use installcheck (presumably\n> along with other custom tests) to exercise it, sometimes with\n> wal_consistency_check enabled. So, why don't we automate that?
Aside\n> from exercising the WAL decoding machinery (which brought me here),\n> that'd hopefully provide some decent improvements in coverage of the\n> various redo routines, many of which are not currently exercised at\n> all.\n\nYay.\n\n\n> I'm not quite sure where it belongs, though. The attached initial\n> sketch patch puts it under rc/test/recovery near other similar things,\n> but I'm not sure if it's really OK to invoke make -C ../regress from\n> here.\n\nI'd say it's not ok, and we should just invoke pg_regress without make.\n\n\n> Add a new TAP test under src/test/recovery that runs the regression\n> tests with wal_consistency_checking=all.\n\nHm. I wonder if running with wal_consistency_checking=all doesn't also\nreduce coverage of some aspects of recovery, by increasing record sizes\netc.\n\n\n> I copied pg_update/test.sh's trick of using a different\n> outputdir to avoid clobbering a concurrent run under src/test/regress,\n> and I also needed to invent a way to stop it from running the cursed\n> tablespace test (deferring startup of the standby also works but eats\n> way too much space, which I learned after blowing out a smallish\n> development VM's disk).\n\nThat's because you are using wal_consistency_checking=all, right?\nBecause IIRC we don't generate that much WAL otherwise?\n\n\n> +# Create some content on primary and check its presence in standby 1\n> +$node_primary->safe_psql('postgres',\n> +\t\"CREATE TABLE tab_int AS SELECT generate_series(1,1002) AS a\");\n> +\n> +# Wait for standby to catch up\n> +$node_primary->wait_for_catchup($node_standby_1, 'replay',\n> +\t$node_primary->lsn('insert'));\n\n> +my $result =\n> + $node_standby_1->safe_psql('postgres', \"SELECT count(*) FROM tab_int\");\n> +print \"standby 1: $result\\n\";\n> +is($result, qq(1002), 'check streamed content on standby 1');\n\nWhy is this needed?\n\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 23 Apr 2021 08:20:31 -0700", "msg_from": "Andres Freund
<andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "\nOn 4/23/21 1:37 AM, Thomas Munro wrote:\n> Hi,\n>\n> We have automated tests for many specific replication and recovery\n> scenarios, but nothing that tests replay of a wide range of records.\n> People working on recovery code often use installcheck (presumably\n> along with other custom tests) to exercise it, sometimes with\n> wal_consistency_check enabled. So, why don't we automate that? Aside\n> from exercising the WAL decoding machinery (which brought me here),\n> that'd hopefully provide some decent improvements in coverage of the\n> various redo routines, many of which are not currently exercised at\n> all.\n>\n> I'm not quite sure where it belongs, though. The attached initial\n> sketch patch puts it under rc/test/recovery near other similar things,\n> but I'm not sure if it's really OK to invoke make -C ../regress from\n> here. I copied pg_update/test.sh's trick of using a different\n> outputdir to avoid clobbering a concurrent run under src/test/regress,\n> and I also needed to invent a way to stop it from running the cursed\n> tablespace test (deferring startup of the standby also works but eats\n> way too much space, which I learned after blowing out a smallish\n> development VM's disk). Alternatively, we could put it under\n> src/test/regress, which makes some kind of logical sense, but it would\n> make a quick \"make check\" take more than twice as long. Perhaps it\n> could use a different target, one that check-world somehow reaches but\n> not check, and that also doesn't seem great. Ideas on how to\n> structure this or improve the perl welcome.\n\n\n\nNice, I like adding a skip-tests option to pg_regress.\n\nThe perl looks ok - I assume those\n\n    print \"standby 1: $result\\n\";  \n\nlines are there for debugging.
Normally you would just process $result\nusing the Test::More functions\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 23 Apr 2021 11:38:39 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-04-23 17:37:48 +1200, Thomas Munro wrote:\n>> We have automated tests for many specific replication and recovery\n>> scenarios, but nothing that tests replay of a wide range of records.\n\n> Yay.\n\n+1\n\n>> Add a new TAP test under src/test/recovery that runs the regression\n>> tests with wal_consistency_checking=all.\n\n> Hm. I wonder if running with wal_consistency_checking=all doesn't also\n> reduce coverage of some aspects of recovery, by increasing record sizes\n> etc.\n\nYeah. I found out earlier that wal_consistency_checking=all is an\nabsolute PIG. Running the regression tests that way requires tens of\ngigabytes of disk space, and a significant amount of time if your\ndisk is not very speedy. If we put this into the buildfarm at all,\nit would have to be opt-in, not run-by-default, because a lot of BF\nanimals simply don't have the horsepower. I think I'd vote against\nadding it to check-world, too; the cost/benefit ratio is not good\nunless you are specifically working on replay functions.\n\nAnd as you say, it alters the behavior enough to make it a little\nquestionable whether we're exercising the \"normal\" code paths.\n\nThe two things I'd say about this are (1) Whether to use\nwal_consistency_checking, and with what value, needs to be\neasily adjustable.
(2) People will want to run other test suites\nthan the core regression tests, eg contrib modules.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 23 Apr 2021 11:53:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "Hi,\n\nOn 2021-04-23 11:53:35 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Hm. I wonder if running with wal_consistency_checking=all doesn't also\n> > reduce coverage of some aspects of recovery, by increasing record sizes\n> > etc.\n> \n> Yeah. I found out earlier that wal_consistency_checking=all is an\n> absolute PIG. Running the regression tests that way requires tens of\n> gigabytes of disk space, and a significant amount of time if your\n> disk is not very speedy. If we put this into the buildfarm at all,\n> it would have to be opt-in, not run-by-default, because a lot of BF\n> animals simply don't have the horsepower. I think I'd vote against\n> adding it to check-world, too; the cost/benefit ratio is not good\n> unless you are specifically working on replay functions.\n\nI think it'd be a huge improvement to test recovery during check-world\nby default - it's a huge swath of crucial code that practically has no\ntest coverage. I agree that testing by default with\nwal_consistency_checking=all isn't feasible due to the time & space\noverhead, so I think we should not do that.\n\n\n> The two things I'd say about this are (1) Whether to use\n> wal_consistency_checking, and with what value, needs to be\n> easily adjustable. (2) People will want to run other test suites\n> than the core regression tests, eg contrib modules.\n\nI'm not really excited about tackling 2) initially. 
I think it's a\nsignificant issue that we don't have test coverage for most redo\nroutines and that we should change that with some urgency - even if we\nback out these WAL prefetch related changes, there've been sufficiently\nmany changes affecting WAL that I think it's worth doing the minimal\nthing soon.\n\nI don't think there's actually that much need to test contrib modules\nthrough recovery - most of them don't seem like they'd add meaningful\ncoverage? The exception is contrib/bloom, but perhaps that'd be better\ntackled with a dedicated test?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 23 Apr 2021 10:04:52 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-04-23 11:53:35 -0400, Tom Lane wrote:\n>> Yeah. I found out earlier that wal_consistency_checking=all is an\n>> absolute PIG. Running the regression tests that way requires tens of\n>> gigabytes of disk space, and a significant amount of time if your\n>> disk is not very speedy. If we put this into the buildfarm at all,\n>> it would have to be opt-in, not run-by-default, because a lot of BF\n>> animals simply don't have the horsepower. I think I'd vote against\n>> adding it to check-world, too; the cost/benefit ratio is not good\n>> unless you are specifically working on replay functions.\n\n> I think it'd be a huge improvement to test recovery during check-world\n> by default - it's a huge swath of crucial code that practically has no\n> test coverage.
I agree that testing by default with\n> wal_consistency_checking=all isn't feasible due to the time & space\n> overhead, so I think we should not do that.\n\nI was mainly objecting to enabling wal_consistency_checking by default.\nI agree it's bad that we have so little routine test coverage on WAL\nreplay code.\n\n>> The two things I'd say about this are (1) Whether to use\n>> wal_consistency_checking, and with what value, needs to be\n>> easily adjustable. (2) People will want to run other test suites\n>> than the core regression tests, eg contrib modules.\n\n> I don't think there's actually that much need to test contrib modules\n> through recovery - most of them don't seem like they'd add meaningful\n> coverage? The exception is contrib/bloom, but perhaps that'd be better\n> tackled with a dedicated test?\n\ncontrib/bloom is indeed the only(?) case within contrib, but in my mind\nthat's a proxy for what people will be needing to test out-of-core AMs.\nIt might not be a test to run by default, but there needs to be a way\nto do it.\n\nAlso, I suspect that there are bits of GIST/GIN/SPGIST that are not\nwell exercised if you don't run the contrib modules that add opclasses\nfor those.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 23 Apr 2021 13:13:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "Hi,\n\nOn 2021-04-23 13:13:15 -0400, Tom Lane wrote:\n> contrib/bloom is indeed the only(?) case within contrib, but in my mind\n> that's a proxy for what people will be needing to test out-of-core AMs.\n> It might not be a test to run by default, but there needs to be a way\n> to do it.\n\nHm. My experience in the past was that the best way to test an external\nAM is to run the core regression tests with a different\ndefault_table_access_method. 
That does require some work of ensuring the\nAM is installed and the relevant extension created, which in turn\nrequires a different test schedule, or hacking template1. So likely a\ndifferent test script anyway?\n\n\n> Also, I suspect that there are bits of GIST/GIN/SPGIST that are not\n> well exercised if you don't run the contrib modules that add opclasses\n> for those.\n\nPossible - still think it'd be best to get the minimal thing in asap,\nand then try to extend further afterwards... Perfect being the enemy of\ngood and all that.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 23 Apr 2021 10:22:49 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "Ok, here's a new version incorporating feedback so far.\n\n1. Invoke pg_regress directly (no make).\n\n2. Use PG_TEST_EXTRA=\"wal_consistency_checking\" as a way to opt in to\nthe more expensive test.\n\n3. Use parallel schedule rather than serial. It's faster but also\nthe non-determinism might discover more things. This required\nchanging the TAP test max_connections setting from 10 to 25.\n\n4. Remove some extraneous print statements and\ncheck-if-data-is-replicated-using-SELECT tests that are technically\nnot needed (I had copied those from 001_stream_rep.pl).", "msg_date": "Tue, 4 May 2021 23:12:17 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "вт, 8 июн. 2021 г. в 02:25, Thomas Munro <thomas.munro@gmail.com>:\n\n> Ok, here's a new version incorporating feedback so far.\n>\n> 1. Invoke pg_regress directly (no make).\n>\n> 2. Use PG_TEST_EXTRA=\"wal_consistency_checking\" as a way to opt in to\n> the more expensive test.\n>\n> 3. Use parallel schedule rather than serial. It's faster but also\n> the non-determinism might discover more things. 
This required\nchanging the TAP test max_connections setting from 10 to 25.\n\n4. Remove some extraneous print statements and\ncheck-if-data-is-replicated-using-SELECT tests that are technically\nnot needed (I had copied those from 001_stream_rep.pl).", "msg_date": "Tue, 4 May 2021 23:12:17 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "Tue, 8 Jun 2021 at 02:25, Thomas Munro <thomas.munro@gmail.com>:\n\n> Ok, here's a new version incorporating feedback so far.\n>\n> 1. Invoke pg_regress directly (no make).\n>\n> 2. Use PG_TEST_EXTRA=\"wal_consistency_checking\" as a way to opt in to\n> the more expensive test.\n>\n> 3. Use parallel schedule rather than serial. It's faster but also\n> the non-determinism might discover more things.
I think\n> it will become a very handy tool for hackers.\n>\n> To try the patch I had to resolve a few merge conflicts, see a rebased\n> version in attachments.\n>\n> > auth_extra => [ '--create-role', 'repl_role' ]);\n> This line and the comment above it look like some copy-paste artifacts.\n> Did I get it right? If so, I suggest removing them.\n> Other than that, the patch looks good to me.\n>\n\nFor some reason, it failed on cfbot, so I'll switch it back to 'Waiting on\nauthor'.\nBTW, do I get it right, that cfbot CI will need some adjustments to print\nregression.out for this test?\n\nSee one more revision of the patch attached. It contains the following\nchanges:\n- rebase to recent main\n- removed 'auth_extra' piece, that I mentioned above.\n- added lacking make clean and gitignore changes.\n\n-- \nBest regards,\nLubennikova Anastasia", "msg_date": "Thu, 10 Jun 2021 10:37:27 +0300", "msg_from": "Anastasia Lubennikova <lubennikovaav@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "On Thu, Jun 10, 2021 at 7:37 PM Anastasia Lubennikova\n<lubennikovaav@gmail.com> wrote:\n> вт, 8 июн. 2021 г. в 02:44, Anastasia Lubennikova <lubennikovaav@gmail.com>:\n>> Thank you for working on this test set!\n>> I was especially glad to see the skip-tests option for pg_regress. I think it will become a very handy tool for hackers.\n>>\n>> To try the patch I had to resolve a few merge conflicts, see a rebased version in attachments.\n>>\n>> > auth_extra => [ '--create-role', 'repl_role' ]);\n>> This line and the comment above it look like some copy-paste artifacts. Did I get it right? If so, I suggest removing them.\n>> Other than that, the patch looks good to me.\n>\n> For some reason, it failed on cfbot, so I'll switch it back to 'Waiting on author'.\n> BTW, do I get it right, that cfbot CI will need some adjustments to print regression.out for this test?\n>\n> See one more revision of the patch attached. 
It contains the following changes:\n> - rebase to recent main\n> - removed 'auth_extra' piece, that I mentioned above.\n> - added lacking make clean and gitignore changes.\n\nThanks! Yeah, there does seem to be a mysterious CI failure there,\nnot reproducible locally for me. You're right that it's not dumping\nenough information to diagnose the problem... I will look into it\ntomorrow.\n\n\n", "msg_date": "Thu, 10 Jun 2021 19:47:22 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "On Thu, Jun 10, 2021 at 7:47 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Thu, Jun 10, 2021 at 7:37 PM Anastasia Lubennikova\n> <lubennikovaav@gmail.com> wrote:\n> > For some reason, it failed on cfbot, so I'll switch it back to 'Waiting on\n\nSorry for the delay. I got stuck in a CI loop trying to make a new\nimproved version work on Windows, so far without success. If anyone\ncan tell me what's wrong on that OS I'd be very grateful to hear it.\nI don't know if it's something I haven't understood about reparse\npoints, or something to do with \"restricted tokens\" and accounts\nprivileges. 
The symptoms on Cirrus are:\n\n DROP TABLESPACE regress_tblspacewith;\n +WARNING: could not open directory\n\"pg_tblspc/16386/PG_15_202109101\": No such file or directory\n +WARNING: could not stat file \"pg_tblspc/16386\": No such file or directory\n\nThe short explanation of this new version is that the tablespace tests\nnow work on a pair of local nodes because you can do this sort of\nthing:\n\npostgres=# create tablespace xxx location 'pg_user_files/xxx';\nERROR: directory \"pg_user_files/xxx\" does not exist\npostgres=# create tablespace xxx new location 'pg_user_files/xxx';\nCREATE TABLESPACE\n\nPatches:\n\n0001: Allow restricted relative tablespace paths.\n\nRationale: I really want to be able to run the tablespace test with a\nlocal replica, instead of just skipping it (including but not only\nfrom this new TAP test). After re-reading a bunch of threads about\nhow to achieve that that never went anywhere and considering various\nideas already presented, I wondered if we could agree on allowing\nrelative paths under one specific directory \"pg_user_files\" (a\ndirectory that PostgreSQL itself will completely ignore). Better\nnames welcome.\n\nI wonder if for Windows we'd want to switch to real symlinks, since,\nas far as I know from some light reading, reparse points are converted\nto absolute paths on creation, so the pgdata directory would be less\nportable than it would be on POSIX systems.\n\n0002: CREATE TABLESPACE ... NEW LOCATION.\n\nThe new syntax \"NEW\" says that it's OK if the directory doesn't exist\nyet, we'll just create it.\n\nRationale: With relative paths, it's tricky for pg_regress to find\nthe data directory of the primary server + any streaming replicas that\nmay be downstream from it (and possibly remote) to create the\ndirectory, but the server can do it easily. 
Better syntax welcome.\n(I originally wanted to use WITH (<something>) but that syntax is\ntangled up with persistent relopts.)\n\n0003: Use relative paths for tablespace regression test.\n\nRemove the pg_regress logic for creating the directory, and switch to\nrelative paths using the above.\n\n0004: Test replay of regression tests.\n\nSame as before, this adds a replicated run of the regression tests in\nsrc/test/recovery/t/027_stream_regress.pl, with an optional expensive\nmode that you can enable with\nPG_TEST_EXTRA=\"wal_consistency_checking\".\n\nI removed the useless --create-role as pointed out by Anastasia.\n\nI added a step to compare the contents of the primary and replica\nservers with pg_dump, as suggested by Tsunakawa-san.\n\nI think the way I pass in the psql source directory to --bindir is not\ngood, but I've reached my daily limit of Perl; how should I be\nspecifying the tmp_install bin directory here? This is so pg_regress\ncan find psql.\n\nsystem_or_bail(\"../regress/pg_regress\",\n \"--bindir=../../bin/psql\",\n \"--port=\" . $node_primary->port,\n \"--schedule=../regress/parallel_schedule\",\n \"--dlpath=../regress\",\n \"--inputdir=../regress\");", "msg_date": "Wed, 6 Oct 2021 19:10:12 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "On Wed, Oct 6, 2021 at 7:10 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I wonder if for Windows we'd want to switch to real symlinks, since,\n> as far as I know from some light reading, reparse points are converted\n> to absolute paths on creation, so the pgdata directory would be less\n> portable than it would be on POSIX systems.\n\nI looked into that a bit, and it seems that the newer \"real\" symlinks\ncan't be created without admin privileges, unlike junction points, so\nthat wouldn't help. 
I still need to figure out what this\njunction-based patch set is doing wrong on Windows though trial-by-CI.\nIn the meantime, here is a rebase over recent changes to the Perl\ntesting APIs.", "msg_date": "Tue, 23 Nov 2021 22:07:50 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "\nOn 11/23/21 04:07, Thomas Munro wrote:\n> On Wed, Oct 6, 2021 at 7:10 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> I wonder if for Windows we'd want to switch to real symlinks, since,\n>> as far as I know from some light reading, reparse points are converted\n>> to absolute paths on creation, so the pgdata directory would be less\n>> portable than it would be on POSIX systems.\n> I looked into that a bit, and it seems that the newer \"real\" symlinks\n> can't be created without admin privileges, unlike junction points, so\n> that wouldn't help. I still need to figure out what this\n> junction-based patch set is doing wrong on Windows though trial-by-CI.\n> In the meantime, here is a rebase over recent changes to the Perl\n> testing APIs.\n\n\nMore exactly you need to \"Create Symbolic Links\" privilege. 
see\n<https://github.com/git-for-windows/git/wiki/Symbolic-Links>\n\n\nI haven't yet worked out a way of granting that from the command line,\nbut if it's working the buildfarm client (as of git tip) will use it on\nwindows for the workdirs space saving feature.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 23 Nov 2021 10:47:09 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "\nOn 11/23/21 10:47, Andrew Dunstan wrote:\n> On 11/23/21 04:07, Thomas Munro wrote:\n>> On Wed, Oct 6, 2021 at 7:10 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>>> I wonder if for Windows we'd want to switch to real symlinks, since,\n>>> as far as I know from some light reading, reparse points are converted\n>>> to absolute paths on creation, so the pgdata directory would be less\n>>> portable than it would be on POSIX systems.\n>> I looked into that a bit, and it seems that the newer \"real\" symlinks\n>> can't be created without admin privileges, unlike junction points, so\n>> that wouldn't help. I still need to figure out what this\n>> junction-based patch set is doing wrong on Windows though trial-by-CI.\n>> In the meantime, here is a rebase over recent changes to the Perl\n>> testing APIs.\n>\n> More exactly you need to \"Create Symbolic Links\" privilege. see\n> <https://github.com/git-for-windows/git/wiki/Symbolic-Links>\n>\n>\n> I haven't yet worked out a way of granting that from the command line,\n> but if it's working the buildfarm client (as of git tip) will use it on\n> windows for the workdirs space saving feature.\n\n\nUpdate:\n\nThere is a PowerShell module called Carbon which provides this and a\nwhole lot more. It can be installed in numerous ways, including via\nChocolatey. 
Here's what I am using:\n\n choco install -y Carbon\n Import-Module Carbon\n Grant-CPrivilege -Identity myuser -Privilege SeCreateSymbolicLinkPrivilege\n\nSee <https://get-carbon.org/Grant-Privilege.html> The command name I\nused above is now the preferred spelling, although that's not reflected\non the manual page.\n\n\ncheers\n\n\nandrew\n\n-- \n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 29 Nov 2021 09:14:24 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "On Tue, Nov 30, 2021 at 3:14 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> choco install -y Carbon\n> Import-Module Carbon\n> Grant-CPrivilege -Identity myuser -Privilege SeCreateSymbolicLinkPrivilege\n\nInteresting. Well, I found the problem with my last patch (to wit:\njunction points must be absolute, unlike real symlinks, which I'd\nconsidered already but I missed that tmp_check's DataDir had a stray\ninternal \\.\\), and now I'm wondering whether these newer real symlinks\ncould help. The constraints are pretty hard to work with... I thought\nabout a couple of options:\n\n1. We could try to use real symlinks, and fall back to junction\npoints if that fails. That means that these new tests I'm proposing\nwould fail unless you grant that privilege or run in developer mode as\nyou were saying. It bothers me a bit that developers and the BF would\nbe testing a different code path than production databases run...\nunless you're thinking we should switch to symlinks with no fallback,\nand require that privilege to be granted by end users to production\nservers at least if they want to use tablespaces, and also drop\npre-Win10 support in the same release? That's bigger than I was\nthinking.\n\n2. We could convert relative paths to absolute paths at junction\npoint creation time, which I tried, and \"check\" now passes. 
Problems:\n(1) now you can't move pgdata around, (2) the is-the-path-too-long\ncheck performed on a primary is not sufficient to check if the\ntransformed absolute path will be too long on a replica.\n\nThe most conservative simple idea I have so far is: go with option 2,\nbut make this whole thing an undocumented developer-only mode, and\nturn it on in the regression tests. Here's a patch set like that.\nThoughts?\n\nAnother option would be to stop using operating system symlinks, and\nbuild the target paths ourselves; I didn't investigate that as it\nseemed like a bigger change than this warrants.\n\nNext problem: The below is clearly not the right way to find the\npg_regress binary and bindir, and doesn't work on Windows or VPATH.\nAny suggestions for how to do this? I feel like something like\n$node->installed_command() or similar logic is needed...\n\n# Run the regression tests against the primary.\n# XXX How should we find the pg_regress binary and bindir?\nsystem_or_bail(\"../regress/pg_regress\",\n \"--bindir=../../bin/psql\",\n \"--port=\" . $node_primary->port,\n \"--schedule=../regress/parallel_schedule\",\n \"--dlpath=../regress\",\n \"--inputdir=../regress\");\n\nBTW 0002 is one of those renaming patches from git that GNU patch\ndoesn't seem to apply correctly, sorry cfbot...", "msg_date": "Sat, 4 Dec 2021 17:21:08 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "\nOn 12/3/21 23:21, Thomas Munro wrote:\n>\n> Next problem: The below is clearly not the right way to find the\n> pg_regress binary and bindir, and doesn't work on Windows or VPATH.\n> Any suggestions for how to do this? 
I feel like something like\n> $node->installed_command() or similar logic is needed...\n>\n> # Run the regression tests against the primary.\n> # XXX How should we find the pg_regress binary and bindir?\n> system_or_bail(\"../regress/pg_regress\",\n> \"--bindir=../../bin/psql\",\n> \"--port=\" . $node_primary->port,\n> \"--schedule=../regress/parallel_schedule\",\n> \"--dlpath=../regress\",\n> \"--inputdir=../regress\");\n>\n\nTAP tests are passed a path to pg_regress as $ENV{PG_REGRESS}. You\nshould be using that. On non-MSVC, the path to a non-installed psql is\nin this case  \"$TESTDIR/../../bin/psql\" - this should work for VPATH\nbuilds as well as non-VPATH. On MSVC it's a bit harder - it's\n\"$top_builddir/$releasetype/psql\" but we don't expose that. Perhaps we\nshould. c.f. commit f4ce6c4d3a\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 4 Dec 2021 10:16:50 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "On Sun, Dec 5, 2021 at 4:16 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> TAP tests are passed a path to pg_regress as $ENV{PG_REGRESS}. You\n> should be using that. On non-MSVC, the path to a non-installed psql is\n> in this case \"$TESTDIR/../../bin/psql\" - this should work for VPATH\n> builds as well as non-VPATH. On MSVC it's a bit harder - it's\n> \"$top_builddir/$releasetype/psql\" but we don't expose that. Perhaps we\n> should. c.f. commit f4ce6c4d3a\n\nThanks, that helped. Here's a new version that passes on Windows,\nUnix and Unix with VPATH. I also had to figure out where the DLLs\nare, and make sure that various output files go to the build\ndirectory, not source directory, if different, which I did by passing\ndown another similar environment variable. Better ideas? 
(It\nconfused me for some time that make follows the symlink and runs the\nperl code from inside the source directory.)\n\nThis adds 2 whole minutes to the recovery check, when running with the\nWindows serial-only scripts on Cirrus CI (using Andres's CI patches).\nFor Linux it adds ~20 seconds to the total of -j8 check-world.\nHopefully that's time well spent, because it adds test coverage for\nall the redo routines, and hopefully soon we won't have to run 'em in\nseries on Windows.\n\nDoes anyone want to object to the concept of the \"pg_user_files\"\ndirectory or the developer-only GUC \"allow_relative_tablespaces\"?\nThere's room for discussion about names; maybe initdb shouldn't create\nthe directory unless you ask it to, or something.", "msg_date": "Thu, 9 Dec 2021 12:10:23 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "\nOn 12/8/21 18:10, Thomas Munro wrote:\n> On Sun, Dec 5, 2021 at 4:16 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>> TAP tests are passed a path to pg_regress as $ENV{PG_REGRESS}. You\n>> should be using that. On non-MSVC, the path to a non-installed psql is\n>> in this case \"$TESTDIR/../../bin/psql\" - this should work for VPATH\n>> builds as well as non-VPATH. On MSVC it's a bit harder - it's\n>> \"$top_builddir/$releasetype/psql\" but we don't expose that. Perhaps we\n>> should. c.f. commit f4ce6c4d3a\n> Thanks, that helped. Here's a new version that passes on Windows,\n> Unix and Unix with VPATH. I also had to figure out where the DLLs\n> are, and make sure that various output files go to the build\n> directory, not source directory, if different, which I did by passing\n> down another similar environment variable. Better ideas? (It\n> confused me for some time that make follows the symlink and runs the\n> perl code from inside the source directory.)\n\n\nThe new version appears to set an empty --bindir for pg_regress. 
Is that\nright?\n\n\n> This adds 2 whole minutes to the recovery check, when running with the\n> Windows serial-only scripts on Cirrus CI (using Andres's CI patches).\n> For Linux it adds ~20 seconds to the total of -j8 check-world.\n> Hopefully that's time well spent, because it adds test coverage for\n> all the redo routines, and hopefully soon we won't have to run 'em in\n> series on Windows.\n>\n> Does anyone want to object to the concept of the \"pg_user_files\"\n> directory or the developer-only GUC \"allow_relative_tablespaces\"?\n> There's room for discussion about names; maybe initdb shouldn't create\n> the directory unless you ask it to, or something.\n\n\nI'm slightly worried that some bright spark will discover it and think\nit's a good idea for a production setup.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 9 Dec 2021 08:12:14 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "Hi,\n\nOn 2021-12-09 08:12:14 -0500, Andrew Dunstan wrote:\n> > Does anyone want to object to the concept of the \"pg_user_files\"\n> > directory or the developer-only GUC \"allow_relative_tablespaces\"?\n> > There's room for discussion about names; maybe initdb shouldn't create\n> > the directory unless you ask it to, or something.\n\nPersonally I'd rather put relative tablespaces into a dedicated directory or\njust into pg_tblspc, but without a symlink. 
Some tools need to understand\ntablespace layout etc, and having them in a directory that, by the name, will\nalso contain other things seems likely to cause confusion.\n\n\n> I'm slightly worried that some bright spark will discover it and think\n> it's a good idea for a production setup.\n\nIt'd not really be worse than the current situation of accidentally corrupting\na local replica or such :/.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 9 Dec 2021 11:38:29 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "On Fri, Dec 10, 2021 at 2:12 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> The new version appears to set an empty --bindir for pg_regress. Is that\n> right?\n\nIt seems to be necessary to find eg psql, since --bindir='' means\n\"expect $PATH to contain the installed binaries\", and that's working\non both build systems. The alternative would be to export yet another\nenvironment variable, $PG_INSTALL or such -- do you think that'd be\nbetter, or did I miss something that exists already like that?\n\n\n", "msg_date": "Fri, 10 Dec 2021 09:15:38 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "On Fri, Dec 10, 2021 at 8:38 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2021-12-09 08:12:14 -0500, Andrew Dunstan wrote:\n> > > Does anyone want to object to the concept of the \"pg_user_files\"\n> > > directory or the developer-only GUC \"allow_relative_tablespaces\"?\n> > > There's room for discussion about names; maybe initdb shouldn't create\n> > > the directory unless you ask it to, or something.\n>\n> Personally I'd rather put relative tablespaces into a dedicated directory or\n> just into pg_tblspc, but without a symlink. 
Some tools need to understand\n> tablespace layout etc, and having them in a directory that, by the name, will\n> also contain other things seems likely to cause confusion.\n\nAlright, let me try it that way... more soon.\n\n\n", "msg_date": "Fri, 10 Dec 2021 10:35:19 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "Hi,\n\nOn 2021-12-09 12:10:23 +1300, Thomas Munro wrote:\n> From a60ada37f3ff7d311d56fe453b2abeadf0b350e5 Mon Sep 17 00:00:00 2001\n> From: Thomas Munro <thomas.munro@gmail.com>\n> Date: Wed, 4 Aug 2021 22:17:54 +1200\n> Subject: [PATCH v8 2/5] Use relative paths for tablespace regression test.\n> \n> Remove the machinery from pg_regress that manages the testtablespace\n> directory. Instead, use a relative path.\n> \n> Discussion: https://postgr.es/m/CA%2BhUKGKpRWQ9SxdxxDmTBCJoR0YnFpMBe7kyzY8SUQk%2BHeskxg%40mail.gmail.com\n\nSeems like we ought to add a tiny tap test or such for this - otherwise we\nwon't have any coverage of \"normal\" tablespaces? 
I don't think they need to be\nexercised as part of the normal tests, but having some very basic testing\nin a tap test seems worthwhile.\n\n\n> diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm\n> index 9467a199c8..5cfa137cde 100644\n> --- a/src/test/perl/PostgreSQL/Test/Cluster.pm\n> +++ b/src/test/perl/PostgreSQL/Test/Cluster.pm\n> @@ -460,7 +460,7 @@ sub init\n> \t\tprint $conf \"hot_standby = on\\n\";\n> \t\t# conservative settings to ensure we can run multiple postmasters:\n> \t\tprint $conf \"shared_buffers = 1MB\\n\";\n> -\t\tprint $conf \"max_connections = 10\\n\";\n> +\t\tprint $conf \"max_connections = 25\\n\";\n> \t\t# limit disk space consumption, too:\n> \t\tprint $conf \"max_wal_size = 128MB\\n\";\n> \t}\n\nWhat's the relation of this to the rest?\n\n\n> +# Perform a logical dump of primary and standby, and check that they match\n> +command_ok(\n> +\t[ \"pg_dump\", '-f', $outputdir . '/primary.dump', '--no-sync',\n> +\t '-p', $node_primary->port, 'regression' ],\n> +\t\"dump primary server\");\n> +command_ok(\n> +\t[ \"pg_dump\", '-f', $outputdir . '/standby.dump', '--no-sync',\n> +\t '-p', $node_standby_1->port, 'regression' ],\n> +\t\"dump standby server\");\n> +command_ok(\n> +\t[ \"diff\", $outputdir . '/primary.dump', $outputdir . 
'/standby.dump' ],\n> +\t\"compare primary and standby dumps\");\n> +\n\nAbsurd nitpick: What's the deal with using \"\" for one part, and '' for the\nrest?\n\nSeparately: I think the case of seeing diffs will be too hard to debug like\nthis, as the difference isn't shown afaict?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 9 Dec 2021 13:36:05 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "On Fri, Dec 10, 2021 at 10:36 AM Andres Freund <andres@anarazel.de> wrote:\n> Seems like we ought to add a tiny tap test or such for this - otherwise we\n> won't have any coverage of \"normal\" tablespaces? I don't think they need to be\n> exercised as part of the normal tests, but having some very basic testing\n> in a tap test seems worthwhile.\n\nGood idea, that was bothering me too. Done.\n\n> > - print $conf \"max_connections = 10\\n\";\n> > + print $conf \"max_connections = 25\\n\";\n\n> What's the relation of this to the rest?\n\nSomeone decided that allow_streaming should imply max_connections =\n10, but we need ~20 to run the parallel regression test schedule.\nHowever, I can just as easily move that to a local adjustment in the\nTAP test file. Done, like so:\n\n+$node_primary->adjust_conf('postgresql.conf', 'max_connections', '25', 1);\n\n> Absurd nitpick: What's the deal with using \"\" for one part, and '' for the\n> rest?\n\nFixed.\n\n> Separately: I think the case of seeing diffs will be too hard to debug like\n> this, as the difference isn't shown afaict?\n\nSeems to be OK. 
The output goes to\nsrc/test/recovery/tmp_check/log/regress_log_027_stream_regress, so for\nexample if you comment out the bit that deals with SEQUENCE caching\nyou'll see:\n\n# Running: pg_dump -f\n/usr/home/tmunro/projects/postgresql/src/test/recovery/primary.dump\n--no-sync -p 63693 regression\nok 2 - dump primary server\n# Running: pg_dump -f\n/usr/home/tmunro/projects/postgresql/src/test/recovery/standby.dump\n--no-sync -p 63694 regression\nok 3 - dump standby server\n# Running: diff\n/usr/home/tmunro/projects/postgresql/src/test/recovery/primary.dump\n/usr/home/tmunro/projects/postgresql/src/test/recovery/standby.dump\n436953c436953\n< SELECT pg_catalog.setval('public.clstr_tst_s_rf_a_seq', 32, true);\n---\n> SELECT pg_catalog.setval('public.clstr_tst_s_rf_a_seq', 33, true);\n... more hunks ...\n\nAnd from the previous email:\n\nOn Fri, Dec 10, 2021 at 10:35 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Fri, Dec 10, 2021 at 8:38 AM Andres Freund <andres@anarazel.de> wrote:\n> > Personally I'd rather put relative tablespaces into a dedicated directory or\n> > just into pg_tblspc, but without a symlink. Some tools need to understand\n> > tablespace layout etc, and having them in a directory that, by the name, will\n> > also contain other things seems likely to cause confusion.\n\nOk, in this version I have a developer-only GUC\nallow_in_place_tablespaces instead. If you turn it on, you can do:\n\nCREATE TABLESPACE regress_blah LOCATION = '';\n\n... and then pg_tblspc/OID is created directly as a directory. 
Not\nallowed by default or documented.", "msg_date": "Fri, 10 Dec 2021 12:58:01 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "On Fri, Dec 10, 2021 at 12:58 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> +$node_primary->adjust_conf('postgresql.conf', 'max_connections', '25', 1);\n\nErm, in fact this requirement came about in an earlier version where I\nwas invoking make and getting --max-concurrent-tests=20 from\nsrc/test/regress/GNUmakefile. Which I should probably replicate\nhere...\n\n\n", "msg_date": "Fri, 10 Dec 2021 13:09:59 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "\nOn 12/9/21 15:15, Thomas Munro wrote:\n> On Fri, Dec 10, 2021 at 2:12 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>> The new version appears to set an empty --bindir for pg_regress. Is that\n>> right?\n> It seems to be necessary to find eg psql, since --bindir='' means\n> \"expect $PATH to contain the installed binaries\", and that's working\n> on both build systems. The alternative would be to export yet another\n> environment variable, $PG_INSTALL or such -- do you think that'd be\n> better, or did I miss something that exists already like that?\n\n\n\nNo, that seems ok. 
Might be worth a comment.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 10 Dec 2021 19:06:08 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "Hi,\n\nOn 2021-12-10 12:58:01 +1300, Thomas Munro wrote:\n> > What's the relation of this to the rest?\n> \n> Someone decided that allow_streaming should imply max_connections =\n> 10, but we need ~20 to run the parallel regression test schedule.\n> However, I can just as easily move that to a local adjustment in the\n> TAP test file. Done, like so:\n> \n> +$node_primary->adjust_conf('postgresql.conf', 'max_connections', '25', 1);\n\nPossible that this will cause problem on some *BSD platform with a limited\ncount of semaphores. But we can deal with that if / when it happens.\n\n\n\n> > Separately: I think the case of seeing diffs will be too hard to debug like\n> > this, as the difference isn't shown afaict?\n> \n> Seems to be OK. The output goes to\n> src/test/recovery/tmp_check/log/regress_log_027_stream_regress, so for\n> example if you comment out the bit that deals with SEQUENCE caching\n> you'll see:\n\nAh, ok. Not sure what I thought there...\n\n\n> On Fri, Dec 10, 2021 at 10:35 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Fri, Dec 10, 2021 at 8:38 AM Andres Freund <andres@anarazel.de> wrote:\n> > > Personally I'd rather put relative tablespaces into a dedicated directory or\n> > > just into pg_tblspc, but without a symlink. Some tools need to understand\n> > > tablespace layout etc, and having them in a directory that, by the name, will\n> > > also contain other things seems likely to cause confusion.\n> \n> Ok, in this version I have a developer-only GUC\n> allow_in_place_tablespaces instead. If you turn it on, you can do:\n> \n> CREATE TABLESPACE regress_blah LOCATION = '';\n\n> ... 
and then pg_tblspc/OID is created directly as a directory. Not\n> allowed by default or documented.\n\nWFM. I think we might eventually want to allow it by default, but we can deal\nwith that whenever somebody wants to spend the energy doing so.\n\n\n\n> @@ -590,16 +595,35 @@ create_tablespace_directories(const char *location, const Oid tablespaceoid)\n> \tchar\t *linkloc;\n> \tchar\t *location_with_version_dir;\n> \tstruct stat st;\n> +\tbool\t\tin_place;\n> \n> \tlinkloc = psprintf(\"pg_tblspc/%u\", tablespaceoid);\n> -\tlocation_with_version_dir = psprintf(\"%s/%s\", location,\n> +\n> +\t/*\n> +\t * If we're asked to make an 'in place' tablespace, create the directory\n> +\t * directly where the symlink would normally go. This is a developer-only\n> +\t * option for now, to facilitate regression testing.\n> +\t */\n> +\tin_place =\n> +\t\t(allow_in_place_tablespaces || InRecovery) && strlen(location) == 0;\n\nWhy is in_place set to true by InRecovery?\n\nISTM that allow_in_place_tablespaces should be checked in CreateTableSpace(),\nand create_tablespace_directories() should just do whatever it's told?\nOtherwise it seems there's ample potential for confusion, e.g. because of the\nGUC differing between primary and replica, or between crash and a new start.\n\n\n> +\tif (in_place)\n> +\t{\n> +\t\tif (MakePGDirectory(linkloc) < 0 && errno != EEXIST)\n> +\t\t\tereport(ERROR,\n> +\t\t\t\t\t(errcode_for_file_access(),\n> +\t\t\t\t\t errmsg(\"could not create directory \\\"%s\\\": %m\",\n> +\t\t\t\t\t\t\tlinkloc)));\n> +\t}\n> +\n> +\tlocation_with_version_dir = psprintf(\"%s/%s\", in_place ? linkloc : location,\n> \t\t\t\t\t\t\t\t\t\t TABLESPACE_VERSION_DIRECTORY);\n> \n> \t/*\n> \t * Attempt to coerce target directory to safe permissions. 
If this fails,\n> \t * it doesn't exist or has the wrong owner.\n> \t */\n> -\tif (chmod(location, pg_dir_create_mode) != 0)\n> +\tif (!in_place && chmod(location, pg_dir_create_mode) != 0)\n> \t{\n> \t\tif (errno == ENOENT)\n> \t\t\tereport(ERROR,\n\nMaybe add a comment saying that we don't need to chmod here because\nMakePGDirectory() takes care of that?\n\n\n> @@ -648,13 +672,13 @@ create_tablespace_directories(const char *location, const Oid tablespaceoid)\n> \t/*\n> \t * In recovery, remove old symlink, in case it points to the wrong place.\n> \t */\n> -\tif (InRecovery)\n> +\tif (!in_place && InRecovery)\n> \t\tremove_tablespace_symlink(linkloc);\n\nI don't think it's right to check !in_place as you do here, given that it\ncurrently depends on a GUC setting that possibly differs between WAL\ngeneration and replay time?\n\n\n> --- a/src/test/regress/output/tablespace.source\n> +++ b/src/test/regress/expected/tablespace.out\n> @@ -1,7 +1,18 @@\n> +-- relative tablespace locations are not allowed\n> +CREATE TABLESPACE regress_tblspace LOCATION 'relative'; -- fail\n> +ERROR: tablespace location must be an absolute path\n> +-- empty tablespace locations are not usually allowed\n> +CREATE TABLESPACE regress_tblspace LOCATION ''; -- fail\n> +ERROR: tablespace location must be an absolute path\n> +-- as a special developer-only option to allow us to use tablespaces\n> +-- with streaming replication on the same server, an empty location\n> +-- can be allowed as a way to say that the tablespace should be created\n> +-- as a directory in pg_tblspc, rather than being a symlink\n> +SET allow_in_place_tablespaces = true;\n> -- create a tablespace using WITH clause\n> -CREATE TABLESPACE regress_tblspacewith LOCATION '@testtablespace@' WITH (some_nonexistent_parameter = true); -- fail\n> +CREATE TABLESPACE regress_tblspacewith LOCATION '' WITH (some_nonexistent_parameter = true); -- fail\n> ERROR: unrecognized parameter \"some_nonexistent_parameter\"\n> -CREATE TABLESPACE 
regress_tblspacewith LOCATION '@testtablespace@' WITH (random_page_cost = 3.0); -- ok\n> +CREATE TABLESPACE regress_tblspacewith LOCATION '' WITH (random_page_cost = 3.0); -- ok\n\nPerhaps also add a test that we catch \"in-place\" tablespace creation with\nallow_in_place_tablespaces = false? Although perhaps that's better done in the\nprevious commit...\n\n\n> +++ b/src/test/modules/test_misc/t/002_tablespace.pl\n\nTwo minor things that I think would be worth testing here:\n1) moving between two \"absolute\" tablespaces. That could conceivably break differently\n between in-place and relative tablespaces.\n2) Moving between absolute and relative tablespace.\n\n\n> +# required for 027_stream_regress.pl\n> +REGRESS_OUTPUTDIR=$(abs_top_builddir)/src/test/recovery\n> +export REGRESS_OUTPUTDIR\n\nWhy do we need this?\n\n\n> +# Initialize primary node\n> +my $node_primary = PostgreSQL::Test::Cluster->new('primary');\n> +$node_primary->init(allows_streaming => 1);\n> +$node_primary->adjust_conf('postgresql.conf', 'max_connections', '25', 1);\n\nProbably should set at least max_prepared_transactions > 0, so the tests\nrequiring prepared xacts can work. They have nontrivial replay routines, so\nthat seems worthwhile?\n\n\n> +# Perform a logical dump of primary and standby, and check that they match\n> +command_ok(\n> +\t[ 'pg_dump', '-f', $outputdir . '/primary.dump', '--no-sync',\n> +\t '-p', $node_primary->port, 'regression' ],\n> +\t'dump primary server');\n> +command_ok(\n> +\t[ 'pg_dump', '-f', $outputdir . '/standby.dump', '--no-sync',\n> +\t '-p', $node_standby_1->port, 'regression' ],\n> +\t'dump standby server');\n> +command_ok(\n> +\t[ 'diff', $outputdir . '/primary.dump', $outputdir . '/standby.dump' ],\n> +\t'compare primary and standby dumps');\n> +\n> +$node_standby_1->stop;\n> +$node_primary->stop;\n\nThis doesn't verify if global objects are replayed correctly. 
Perhaps it'd be\nbetter to use pg_dumpall?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 10 Dec 2021 16:17:19 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "On Fri, Dec 10, 2021 at 12:58:01PM +1300, Thomas Munro wrote:\n> -# required for 017_shm.pl\n> +# required for 017_shm.pl and 027_stream_regress.pl\n> REGRESS_SHLIB=$(abs_top_builddir)/src/test/regress/regress$(DLSUFFIX)\n> export REGRESS_SHLIB\n\nHmm. FWIW, I am working on doing similar for pg_upgrade to switch to\nTAP there, and we share a lot in terms of running pg_regress on an\nexisting cluster. Wouldn't it be better to move this definition to\nsrc/Makefile.global.in rather than src/test/recovery/?\n\nMy pg_regress command is actually very similar to yours, so I am\nwondering if this would be better if somehow centralized, perhaps in\nCluster.pm.\n--\nMichael", "msg_date": "Wed, 15 Dec 2021 17:50:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "On Wed, Dec 15, 2021 at 05:50:45PM +0900, Michael Paquier wrote:\n> Hmm. FWIW, I am working on doing similar for pg_upgrade to switch to\n> TAP there, and we share a lot in terms of running pg_regress on an\n> existing cluster. Wouldn't it be better to move this definition to\n> src/Makefile.global.in rather than src/test/recovery/?\n> \n> My pg_regress command is actually very similar to yours, so I am\n> wondering if this would be better if somehow centralized, perhaps in\n> Cluster.pm.\n\nBy the way, while I was sorting out my things, I have noticed that v4\ndoes not handle EXTRA_REGRESS_OPTS. Is that wanted? 
You could just\nadd that into your patch set and push the extra options to the\npg_regress command:\nmy $extra_opts_val = $ENV{EXTRA_REGRESS_OPTS} || \"\";\nmy @extra_opts = split(/\\s+/, $extra_opts_val);\n--\nMichael", "msg_date": "Thu, 16 Dec 2021 09:22:40 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "Hi,\n\nRebased and updated based on feedback. Responses to multiple emails below:\n\nOn Thu, Dec 16, 2021 at 1:22 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Wed, Dec 15, 2021 at 05:50:45PM +0900, Michael Paquier wrote:\n> > Hmm. FWIW, I am working on doing similar for pg_upgrade to switch to\n> > TAP there, and we share a lot in terms of running pg_regress on an\n> > existing cluster. Wouldn't it be better to move this definition to\n> > src/Makefile.global.in rather than src/test/recovery/?\n> >\n> > My pg_regress command is actually very similar to yours, so I am\n> > wondering if this would be better if somehow centralized, perhaps in\n> > Cluster.pm.\n\nThanks for looking. Right, it sounds like you'll have the same\nproblems I ran into. I haven't updated this patch for that yet, as\nI'm not sure exactly what you need and we could easily move it in a\nlater commit. Does that seem reasonable?\n\n> By the way, while I was sorting out my things, I have noticed that v4\n> does not handle EXTRA_REGRESS_OPTS. Is that wanted? You could just\n> add that into your patch set and push the extra options to the\n> pg_regress command:\n> my $extra_opts_val = $ENV{EXTRA_REGRESS_OPTS} || \"\";\n> my @extra_opts = split(/\\s+/, $extra_opts_val);\n\nSeems like a good idea for consistency, but isn't that a variable\nthat's supposed to be expanded by a shell, not naively split on\nwhitespace? Perhaps we should use the single-argument variant of\nsystem(), so the whole escaped enchilada is passed to a shell? 
Tried\nlike that in this version (though now I'm wondering what the correct\nperl incantation is to shell-escape $outputdir and $dlpath...)\n\nOn Sat, Dec 11, 2021 at 1:17 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2021-12-10 12:58:01 +1300, Thomas Munro wrote:\n> > +$node_primary->adjust_conf('postgresql.conf', 'max_connections', '25', 1);\n>\n> Possible that this will cause problem on some *BSD platform with a limited\n> count of semaphores. But we can deal with that if / when it happens.\n\nRight, those systems don't work out of the box for us already without\nsysctl tweaks, so it doesn't matter if animals have to be adjusted\nfurther.\n\n> > @@ -590,16 +595,35 @@ create_tablespace_directories(const char *location, const Oid tablespaceoid)\n> > char *linkloc;\n> > char *location_with_version_dir;\n> > struct stat st;\n> > + bool in_place;\n> >\n> > linkloc = psprintf(\"pg_tblspc/%u\", tablespaceoid);\n> > - location_with_version_dir = psprintf(\"%s/%s\", location,\n> > +\n> > + /*\n> > + * If we're asked to make an 'in place' tablespace, create the directory\n> > + * directly where the symlink would normally go. This is a developer-only\n> > + * option for now, to facilitate regression testing.\n> > + */\n> > + in_place =\n> > + (allow_in_place_tablespaces || InRecovery) && strlen(location) == 0;\n>\n> Why is in_place set to true by InRecovery?\n\nWell the real condition is strlen(location) == 0, and the other part\nis a sort of belt-and-braces check, but yeah, I should just remove\nthat part. Done.\n\n> ISTM that allow_in_place_tablespaces should be checked in CreateTableSpace(),\n> and create_tablespace_directories() should just do whatever it's told?\n> Otherwise it seems there's ample potential for confusion, e.g. 
because of the\n> GUC differing between primary and replica, or between crash and a new start.\n\nAgreed, that was the effect but the extra unnecessary check was a bit confusing.\n\n> > /*\n> > * Attempt to coerce target directory to safe permissions. If this fails,\n> > * it doesn't exist or has the wrong owner.\n> > */\n> > - if (chmod(location, pg_dir_create_mode) != 0)\n> > + if (!in_place && chmod(location, pg_dir_create_mode) != 0)\n> > {\n> > if (errno == ENOENT)\n> > ereport(ERROR,\n>\n> Maybe add a comment saying that we don't need to chmod here because\n> MakePGDirectory() takes care of that?\n\nDone.\n\n> > @@ -648,13 +672,13 @@ create_tablespace_directories(const char *location, const Oid tablespaceoid)\n> > /*\n> > * In recovery, remove old symlink, in case it points to the wrong place.\n> > */\n> > - if (InRecovery)\n> > + if (!in_place && InRecovery)\n> > remove_tablespace_symlink(linkloc);\n>\n> I don't think it's right to check !in_place as you do here, given that it\n> currently depends on a GUC setting that possibly differs between WAL\n> generation and replay time?\n\nI have to, because otherwise we'll remove the directory we just\ncreated at the top of the function. It doesn't really depend on a GUC\n(clearer after previous change).\n\n> Perhaps also add a test that we catch \"in-place\" tablespace creation with\n> allow_in_place_tablespaces = false? Although perhaps that's better done in the\n> previous commit...\n\nThere was a test for that already, see this bit:\n\n+-- empty tablespace locations are not usually allowed\n+CREATE TABLESPACE regress_tblspace LOCATION ''; -- fail\n+ERROR: tablespace location must be an absolute path\n\n> > +++ b/src/test/modules/test_misc/t/002_tablespace.pl\n>\n> Two minor things that I think would be worth testing here:\n> 1) moving between two \"absolute\" tablespaces. 
That could conceivably break differently\n> between in-place and relative tablespaces.\n> 2) Moving between absolute and relative tablespace.\n\nDone.\n\n> > +# required for 027_stream_regress.pl\n> > +REGRESS_OUTPUTDIR=$(abs_top_builddir)/src/test/recovery\n> > +export REGRESS_OUTPUTDIR\n>\n> Why do we need this?\n\nThe Make macro \"prove_check\" (src/Makefile.global.in) always changes\nto the source directory to run TAP tests. Without an explicit\ndirective to control where regression test output goes, it got\nsplattered all over the source tree in VPATH builds. I didn't see an\nexisting way to adjust that (did I miss something?). Hence desire to\npass down a path in the build tree. Better ideas welcome.\n\n> > +# Initialize primary node\n> > +my $node_primary = PostgreSQL::Test::Cluster->new('primary');\n> > +$node_primary->init(allows_streaming => 1);\n> > +$node_primary->adjust_conf('postgresql.conf', 'max_connections', '25', 1);\n>\n> Probably should set at least max_prepared_transactions > 0, so the tests\n> requiring prepared xacts can work. They have nontrivial replay routines, so\n> that seems worthwhile?\n\nGood idea. Done.\n\n> > +# Perform a logical dump of primary and standby, and check that they match\n> > +command_ok(\n> > + [ 'pg_dump', '-f', $outputdir . '/primary.dump', '--no-sync',\n> > + '-p', $node_primary->port, 'regression' ],\n> > + 'dump primary server');\n> > +command_ok(\n> > + [ 'pg_dump', '-f', $outputdir . '/standby.dump', '--no-sync',\n> > + '-p', $node_standby_1->port, 'regression' ],\n> > + 'dump standby server');\n> > +command_ok(\n> > + [ 'diff', $outputdir . '/primary.dump', $outputdir . '/standby.dump' ],\n> > + 'compare primary and standby dumps');\n> > +\n> > +$node_standby_1->stop;\n> > +$node_primary->stop;\n>\n> This doesn't verify if global objects are replayed correctly. Perhaps it'd be\n> better to use pg_dumpall?\n\nGood idea. 
Done.", "msg_date": "Wed, 22 Dec 2021 11:41:42 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "On Wed, Dec 22, 2021 at 11:41 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Rebased and updated based on feedback. Responses to multiple emails below:\n\nPushed, but the build farm doesn't like it with a couple of different\nways of failing. I'll collect some results and revert shortly.\n\n\n", "msg_date": "Sat, 15 Jan 2022 00:39:48 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "On Sat, Jan 15, 2022 at 12:39 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, Dec 22, 2021 at 11:41 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Rebased and updated based on feedback. Responses to multiple emails below:\n>\n> Pushed, but the build farm doesn't like it with a couple of different\n> ways of failing. I'll collect some results and revert shortly.\n\nProblems:\n\n1. The way I invoke pg_regress doesn't seem to work correctly under\nthe build farm client (though it works fine under make), not sure why\nyet, but reproduced here and will figure it out tomorrow.\n2. 
The new test in src/test/modules/t/002_tablespace.pl apparently has\nsome path-related problem on Windows that I didn't know about, because\nCI isn't even running the TAP tests under src/test/module/test_misc\n(and various other locations), but the BF is :-/ And I was happy\nbecause modulescheck was passing...\n\nI reverted the two commits responsible for those failures to keep the\nbuild farm green, and I'll try to fix them tomorrow.\n\n\n", "msg_date": "Sat, 15 Jan 2022 02:32:35 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "Hi,\n\nOn 2022-01-15 02:32:35 +1300, Thomas Munro wrote:\n> 1. The way I invoke pg_regress doesn't seem to work correctly under\n> the build farm client (though it works fine under make), not sure why\n> yet, but reproduced here and will figure it out tomorrow.\n\nI think it's just a problem of the buildfarm specifying port names in\nextra_opts. E.g.\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=eelpout&dt=2022-01-14%2011%3A49%3A36\nhas\n\n# Checking port 58074\n# Found port 58074\nName: primary\n...\n# Running: /home/tmunro/build-farm/buildroot/HEAD/pgsql.build/src/test/recovery/../../../src/test/regress/pg_regress --dlpath=\"/home/tmunro/build-farm/buildroot/HEAD/pgsql.build/src/test/regress\" --bindir= --port=58074 --schedule=../regress/parallel_schedule --max-concurrent-tests=20 --inputdir=../regress --outputdir=\"/home/tmunro/build-farm/buildroot/HEAD/pgsql.build/src/test/recovery\" --port=5678\n(using postmaster on /tmp/1W6qVPVyCv, port 5678)\n\nNote how there's both --port=58074 and --port=5678 in the pg_regress\ninvocation. The latter coming from EXTRA_REGRESS_OPTS, which the buildfarm\nclient sets.\n\nThe quickest fix would probably be to just move the 027_stream_regress.pl\nadded --port until after $extra_opts?\n\n\n> 2. 
The new test in src/test/modules/t/002_tablespace.pl apparently has\n> some path-related problem on Windows\n\nThis is the failure you're talking about?\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2022-01-14%2012%3A04%3A55\n\n\n> that I didn't know about, because CI isn't even running the TAP tests under\n> src/test/module/test_misc (and various other locations), but the BF is :-/\n> And I was happy because modulescheck was passing...\n\nThis we need to fix... But if you're talking about fairywren's failure, it's\nmore than not running some tests, it's that we do not test windows mingw\noutside of cross compilation.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 14 Jan 2022 15:49:47 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "On Sat, Jan 15, 2022 at 12:49 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-01-15 02:32:35 +1300, Thomas Munro wrote:\n> > 1. The way I invoke pg_regress doesn't seem to work correctly under\n> > the build farm client (though it works fine under make), not sure why\n> > yet, but reproduced here and will figure it out tomorrow.\n>\n> I think it's just a problem of the buildfarm specifying port names in\n> extra_opts. 
E.g.\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=eelpout&dt=2022-01-14%2011%3A49%3A36\n> has\n>\n> # Checking port 58074\n> # Found port 58074\n> Name: primary\n> ...\n> # Running: /home/tmunro/build-farm/buildroot/HEAD/pgsql.build/src/test/recovery/../../../src/test/regress/pg_regress --dlpath=\"/home/tmunro/build-farm/buildroot/HEAD/pgsql.build/src/test/regress\" --bindir= --port=58074 --schedule=../regress/parallel_schedule --max-concurrent-tests=20 --inputdir=../regress --outputdir=\"/home/tmunro/build-farm/buildroot/HEAD/pgsql.build/src/test/recovery\" --port=5678\n> (using postmaster on /tmp/1W6qVPVyCv, port 5678)\n>\n> Note how there's both --port=58074 and --port=5678 in the pg_regress\n> invocation. The latter coming from EXTRA_REGRESS_OPTS, which the buildfarm\n> client sets.\n>\n> The quickest fix would probably be to just move the 027_stream_regress.pl\n> added --port until after $extra_opts?\n\nThanks, I figured it was an environment variable biting me, and indeed\nit was that one. I reordered the arguments, tested locally under the\nbuildfarm client script, and pushed. I'll keep an eye on the build\nfarm.\n\nOne thing I noticed is that the pg_dump output files should really be\nrm'd by the clean target; I may push something for that later.\n\n> > 2. The new test in src/test/modules/t/002_tablespace.pl apparently has\n> > some path-related problem on Windows\n>\n> This is the failure you're talking about?\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2022-01-14%2012%3A04%3A55\n>\n> > that I didn't know about, because CI isn't even running the TAP tests under\n> > src/test/module/test_misc (and various other locations), but the BF is :-/\n> > And I was happy because modulescheck was passing...\n>\n> This we need to fix... 
But if you're talking about fairywren's failure, it's\n> more than not running some tests, it's that we do not test windows mingw\n> outside of cross compilation.\n\nI'm temporarily stumped by complete ignorance of MSYS. I tried the\ntest on plain old Windows/MSVC by cherry-picking the reverted commit\nd1511fe1 and running .\\src\\tools\\msvc\\vcregress.bat taptest\n.\\src\\test\\modules\\test_misc in my Windows 10 VM, and that passed with\nflying colours (so Windows CI would have passed too, if we weren't\nignoring TAP tests in unusual locations, I assume). I'll look into\ninstalling MSYS to work this out if necessary, but it may take me a\nfew days.\n\nHere's how it failed on fairywren, in case someone knowledgeable of\nMSYS path translation etc can spot the problem:\n\npsql:<stdin>:1: ERROR: directory\n\"/home/pgrunner/bf/root/HEAD/pgsql.build/src/test/modules/test_misc/tmp_check/t_002_tablespace_main_data/ts1\"\ndoes not exist\nnot ok 1 - create tablespace with absolute path\n\nI think that means chmod() failed with ENOENT. That's weird, because\nthe .pl does:\n\n+my $TS1_LOCATION = $node->basedir() . \"/ts1\";\n+mkdir($TS1_LOCATION);\n\n\n", "msg_date": "Mon, 17 Jan 2022 17:25:19 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "On Mon, Jan 17, 2022 at 05:25:19PM +1300, Thomas Munro wrote:\n> Here's how it failed on fairywren, in case someone knowledgeable of\n> MSYS path translation etc can spot the problem:\n> \n> psql:<stdin>:1: ERROR: directory\n> \"/home/pgrunner/bf/root/HEAD/pgsql.build/src/test/modules/test_misc/tmp_check/t_002_tablespace_main_data/ts1\"\n> does not exist\n> not ok 1 - create tablespace with absolute path\n> \n> I think that means chmod() failed with ENOENT. That's weird, because\n> the .pl does:\n> \n> +my $TS1_LOCATION = $node->basedir() . 
\"/ts1\";\n> +mkdir($TS1_LOCATION);\n\nYou likely need a PostgreSQL::Test::Utils::perl2host() call. MSYS Perl\nunderstands Cygwin-style names like /home/... as well as Windows-style names,\nbut this PostgreSQL configuration understands only Windows-style names.\n\n\n", "msg_date": "Sun, 16 Jan 2022 21:53:26 -0800", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "On Mon, Jan 17, 2022 at 6:53 PM Noah Misch <noah@leadboat.com> wrote:\n> On Mon, Jan 17, 2022 at 05:25:19PM +1300, Thomas Munro wrote:\n> > Here's how it failed on fairywren, in case someone knowledgeable of\n> > MSYS path translation etc can spot the problem:\n\n> You likely need a PostgreSQL::Test::Utils::perl2host() call. MSYS Perl\n> understands Cygwin-style names like /home/... as well as Windows-style names,\n> but this PostgreSQL configuration understands only Windows-style names.\n\nThanks. I added that and pushed. Let's see if fairywren likes it\nwhen it comes back online.\n\nI also learned that in the CI environment, node->basedir() is a path\ncontaining an internal \".\" component (I mean \"something/./something\").\nI added a regex to collapse those, because they're unacceptable in\nWindows junction point targets. I'm aware that there is something\nhappening in another CF entry that might address that sort of\nthing[1], so then perhaps I could remove the kludge.\n\nI tested that with a throw-away change to .cirrus.yml, like so. 
The\nCI thread[2] is discussing a proper solution to these Windows CI blind\nspots.\n\n test_modules_script:\n - perl src/tools/msvc/vcregress.pl modulescheck\n+ - perl src/tools/msvc/vcregress.pl taptest ./src/test/modules/test_misc\n\n[1] https://commitfest.postgresql.org/36/3331/\n[2] https://www.postgresql.org/message-id/20220114235457.GQ14051%40telsasoft.com\n\n\n", "msg_date": "Fri, 21 Jan 2022 15:42:26 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "On Fri, Jan 21, 2022 at 3:42 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Thanks. I added that and pushed. Let's see if fairywren likes it\n> when it comes back online.\n\nA watched pot never boils, but I wonder why Andrew's 4 Windows\nconfigurations jacana, bowerbird, fairywren and drongo have stopped\nreturning results.\n\n\n", "msg_date": "Sat, 22 Jan 2022 07:58:16 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "Hi,\n\nOn 2022-01-17 17:25:19 +1300, Thomas Munro wrote:\n> I reordered the arguments, tested locally under the buildfarm client script,\n> and pushed. I'll keep an eye on the build farm.\n\nAfter the reloptions fix the tests seem much more likely to succeed than\nbefore. 
Progress!\n\nUnfortunately we don't quite seem there yet:\n\nI saw a couple failures like:\nhttps://api.cirrus-ci.com/v1/artifact/task/5394938773897216/regress_diffs/build/testrun/recovery/t/027_stream_regress/regression.diffs\n(from https://cirrus-ci.com/task/5394938773897216?logs=check_world#L183 )\n\n -- Should succeed\n DROP TABLESPACE regress_tblspace_renamed;\n+ERROR: tablespace \"regress_tblspace_renamed\" is not empty\n\n\nI assume the reason we see this semi-regularly when the regression tests run\nas part of 027_stream_regress, but not in the main regression test run, is\nsimilar to the reloptions problem, namely that we run with a much smaller\nshared buffers.\n\nI assume what happens is that this just makes the known problem of bgwriter or\nsome other process keeping open a filehandle to an already deleted relation,\npreventing the deletion to \"fully\" take effect, worse.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 21 Jan 2022 11:48:27 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "On Sat, Jan 22, 2022 at 8:48 AM Andres Freund <andres@anarazel.de> wrote:\n> Unfortunately we don't quite seem there yet:\n>\n> I saw a couple failures like:\n> https://api.cirrus-ci.com/v1/artifact/task/5394938773897216/regress_diffs/build/testrun/recovery/t/027_stream_regress/regression.diffs\n> (from https://cirrus-ci.com/task/5394938773897216?logs=check_world#L183 )\n>\n> -- Should succeed\n> DROP TABLESPACE regress_tblspace_renamed;\n> +ERROR: tablespace \"regress_tblspace_renamed\" is not empty\n>\n>\n> I assume the reason we see this semi-regularly when the regression tests run\n> as part of 027_stream_regress, but not in the main regression test run, is\n> similar to the reloptions problem, namely that we run with a much smaller\n> shared buffers.\n>\n> I assume what happens is that this just makes the known problem of bgwriter or\n> some 
other process keeping open a filehandle to an already deleted relation,\n> preventing the deletion to \"fully\" take effect, worse.\n\nRight, I assume this would be fixed by [1]. I need to re-convince\nmyself of that patch's correctness and make some changes after\nRobert's feedback; I'll look into committing it next week. From a\ncertain point of view it's now quite good that we hit this case\noccasionally in CI.\n\n[1] https://commitfest.postgresql.org/36/2962/\n\n\n", "msg_date": "Sat, 22 Jan 2022 09:07:31 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "\nOn 1/21/22 13:58, Thomas Munro wrote:\n> On Fri, Jan 21, 2022 at 3:42 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> Thanks. I added that and pushed. Let's see if fairywren likes it\n>> when it comes back online.\n> A watched pot never boils, but I wonder why Andrew's 4 Windows\n> configurations jacana, bowerbird, fairywren and drongo have stopped\n> returning results.\n\n\n\nI think I have unstuck both machines. I will keep an eye on them and\nmake sure they don't get stuck again.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 21 Jan 2022 16:22:05 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "On Sat, Jan 22, 2022 at 8:48 AM Andres Freund <andres@anarazel.de> wrote:\n> Unfortunately we don't quite seem there yet:\n\nAnd another way to fail:\n\npg_dump: error: query failed: ERROR: canceling statement due to\nconflict with recovery\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dangomushi&dt=2022-01-22%2003%3A06%3A42\n\nProbably needs hot_standby_feedback on. 
Will adjust this soon.\n\nOne more failure seen in today's crop was a \"stats\" failure on\nseawasp, which must be the well known pre-existing problem. (Probably\njust needs someone to rewrite the stats subsystem to use shared memory\ninstead of UDP).\n\n\n", "msg_date": "Sat, 22 Jan 2022 18:00:44 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "\nOn 1/21/22 16:22, Andrew Dunstan wrote:\n> On 1/21/22 13:58, Thomas Munro wrote:\n>> On Fri, Jan 21, 2022 at 3:42 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>>> Thanks. I added that and pushed. Let's see if fairywren likes it\n>>> when it comes back online.\n>> A watched pot never boils, but I wonder why Andrew's 4 Windows\n>> configurations jacana, bowerbird, fairywren and drongo have stopped\n>> returning results.\n>\n>\n> I think I have unstuck both machines. I will keep an eye on them and\n> make sure they don't get stuck again.\n>\n>\n\nfairywren is not happy with the recovery tests still.\n\n\nI have noticed on a different setup that this test adds 11 minutes to\nthe runtime of the recovery tests, effectively doubling it. The doubling\nis roughly true on faster setups, too. At least I would like a simple\nway to disable the test.\n\n\nI'm not very happy about that.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 27 Jan 2022 15:27:17 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "Hi,\n\nOn 2022-01-27 15:27:17 -0500, Andrew Dunstan wrote:\n> fairywren is not happy with the recovery tests still.\n\nAny more details?\n\n\n> I have noticed on a different setup that this test adds 11 minutes to the\n> runtime of the recovery tests, effectively doubling it. 
The doubling is\n> roughly true on faster setups, too\n\nDoes a normal regress run take roughly that long? Or is the problem that the\n027_stream_regress.pl ends up defaulting to shared_buffers=1MB, causing lots\nof unnecessary IO?\n\n\n> . At least I would like a simple\n> way to disable the test.\n\nOne thing we could do to speed up the overall runtime would be to move\n027_stream_regress.pl to something numbered earlier. Combined with\nPROVE_FLAGS=-j2 that could at least run them in parallel with the rest of the\nrecovery tests.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 27 Jan 2022 12:47:08 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "On Fri, Jan 28, 2022 at 9:27 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> I have noticed on a different setup that this test adds 11 minutes to\n> the runtime of the recovery tests, effectively doubling it. The doubling\n> is roughly true on faster setups, too. At least I would like a simple\n> way to disable the test.\n\nOuch, that's ... a lot. Some randomly selected times: ~20 seconds\n(dev machines), ~40 seconds (Cirrus CI's Windows image), ~2-3 minutes\n(very cheap cloud host accounts), ~3 minutes (my Raspberry Pi pinned\nonto two CPU cores), ~11 minutes (your Windows number). 
It would be\ngood to understand why that's such an outlier.\n\nRe skipping, I've also been wondering about an exclusion list to skip\nparts of the regression tests that don't really add recovery coverage\nbut take non-trivial time, like the join tests.\n\n\n", "msg_date": "Fri, 28 Jan 2022 10:41:21 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "Hi,\n\nOn 2022-01-27 12:47:08 -0800, Andres Freund wrote:\n> > I have noticed on a different setup that this test adds 11 minutes to the\n> > runtime of the recovery tests, effectively doubling it. The doubling is\n> > roughly true on faster setups, too\n>\n> Does a normal regress run take roughly that long? Or is the problem that the\n> 027_stream_regress.pl ends up defaulting to shared_buffers=1MB, causing lots\n> of unnecessary IO?\n\nIn my msys install a normal regress run takes 57s, 027_stream_regress.pl takes\n194s.\n\nIt's *not* shared_buffers. Or any of the other postgresql.conf settings. As\nfar as I can tell.\n\n< tries a bunch of things >\n\nARGH. It's the utterly broken handling of refused connections on windows. The\npg_regress invocation doesn't specify the host address, just the port.\n\nNow you might reasonably ask, why does that slow things down so much, rather\nthan working or not working? The problem is that a tcp connect() on windows\ndoesn't immediately fail when a connection establishment is rejected, but\ninstead internally retries several times. Which takes 2s. The reason there\nare rejected connections without specifying the host is that Cluster.pm\nconfigures to listen to 127.0.0.1. But the default for libpq/psql is to try\n\"localhost\". Which name resolution returns first as ipv6 (i.e. ::1). Which\ntakes 2s to fail, upon which libpq goes and tries 127.0.0.1, which works.\n\nThat means every single psql started by 027_stream_regress.pl's pg_regress\ntakes 2s. 
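The resolution order libpq ends up walking is easy to eyeball with a quick sketch (Python here purely for illustration; `resolution_order` is a made-up helper, not anything in pg_regress):

```python
# Quick illustration (not pg_regress code): print the order in which a
# host name resolves, i.e. the order libpq will try addresses in.  If
# "localhost" yields ::1 first while the server listens only on
# 127.0.0.1, every connection pays one refused-connect attempt first --
# which on Windows is internally retried for ~2s before failing.
import socket

def resolution_order(host, port=5432):
    """Return resolved addresses in getaddrinfo() order (a sketch)."""
    infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    return [sockaddr[0] for _family, _type, _proto, _canon, sockaddr in infos]

print(resolution_order("localhost"))   # e.g. ['::1', '127.0.0.1'] on such a box
print(resolution_order("127.0.0.1"))  # numeric host: no fallback list to walk
```

With an explicit --host=127.0.0.1 handed to pg_regress, psql skips straight to the working address instead of timing out on ::1 first.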
Which of course adds up...\n\nI'll go and sob in a corner.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 27 Jan 2022 14:03:51 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "On 2022-01-27 14:03:51 -0800, Andres Freund wrote:\n> In my msys install a normal regress run takes 57s, 027_stream_regress.pl takes\n> 194s.\n>\n> That means every single psql started by 027_stream_regress.pl's pg_regress\n> takes 2s. Which of course adds up...\n\nOh, forgot: After adding --host to the pg_regress invocation\n027_stream_regress.pl takes 75s (from 194s before).\n\n\n", "msg_date": "Thu, 27 Jan 2022 14:07:55 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "\nOn 1/27/22 15:47, Andres Freund wrote:\n> Hi,\n>\n> On 2022-01-27 15:27:17 -0500, Andrew Dunstan wrote:\n>> fairywren is not happy with the recovery tests still.\n> Any more details?\n\n\n\nI'll go back and get some.\n\n\n>\n>\n>> I have noticed on a different setup that this test adds 11 minutes to the\n>> runtime of the recovery tests, effectively doubling it. The doubling is\n>> roughly true on faster setups, too\n> Does a normal regress run take roughly that long? Or is the problem that the\n> 027_stream_regress.pl ends up defaulting to shared_buffers=1MB, causing lots\n> of unnecessary IO?\n\n\nOn crake (slowish fedora 34), a normal check run took 95s, and this test\ntook 114s. On my windows test instance where I noticed this (w10,\nmsys2/ucrt), check took 516s and this test took 685s.\n\n\n>\n>\n>> . At least I would like a simple\n>> way to disable the test.\n> One thing we could do to speed up the overall runtime would be to move\n> 027_stream_regress.pl to something numbered earlier. 
Combined with\n> PROVE_FLAGS=-j2 that could at least run them in parallel with the rest of the\n> recovery tests.\n>\n>\n\nSeems like a bandaid.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 27 Jan 2022 17:16:17 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "On Fri, Jan 28, 2022 at 11:03 AM Andres Freund <andres@anarazel.de> wrote:\n> That means every single psql started by 027_stream_regress.pl's pg_regress\n> takes 2s. Which of course adds up...\n\nThat is very surprising, thanks. Will fix.\n\nI've been experimenting with reusing psql sessions and backends for\nqueries in TAP tests, since some Windows animals seem to take a\nsignificant fraction of a second *per query* due to forking and\nstartup costs. ~100ms or whatever is nothing compared to that ~2000ms\nsilliness, but it still adds up over thousands of queries. I'll post\nan experimental patch soon, but this discussion has given me the idea\nthat pg_regress might ideally be able to reuse processes too, at least\nsometimes...\n\n\n", "msg_date": "Fri, 28 Jan 2022 11:21:58 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "Hi,\n\nOn 2022-01-27 17:16:17 -0500, Andrew Dunstan wrote:\n> On crake (slowish fedora 34), a normal check run took 95s, and this test\n> took 114s.\n\nThat's roughly what I see on msys after the fix.\n\n\n> On my windows test instance where I noticed this (w10,\n> msys2/ucrt), check took 516s and this test took 685s.\n\nHm. That's both excruciatingly slow. Way way slower than what I see here, also\nw10, msys2/ucrt. Any chance the test instance has windows defender running,\nwithout a directory exclusion? 
I saw that trash performance to a near\nstandstill.\n\nDoes it get better with the attached patch?\n\n\nI was confused why this didn't fail fatally on CI, which uses\nPG_TEST_USE_UNIX_SOCKETS. I think he reason is that pg_regress' use of PGHOST\nis busted, btw. It says it'll use PGHOST if --host isn't specified, but it\ndoesn't work.\n\n\n\t\t * When testing an existing install, we honor existing environment\n\t\t * variables, except if they're overridden by command line options.\n\t\t */\n\t\tif (hostname != NULL)\n\t\t{\n\t\t\tsetenv(\"PGHOST\", hostname, 1);\n\t\t\tunsetenv(\"PGHOSTADDR\");\n\t\t}\n\nbut hostname is initialized in the existing-install case:\n\n#if !defined(HAVE_UNIX_SOCKETS)\n\tuse_unix_sockets = false;\n#elif defined(WIN32)\n\n\t/*\n\t * We don't use Unix-domain sockets on Windows by default, even if the\n\t * build supports them. (See comment at remove_temp() for a reason.)\n\t * Override at your own risk.\n\t */\n\tuse_unix_sockets = getenv(\"PG_TEST_USE_UNIX_SOCKETS\") ? true : false;\n#else\n\tuse_unix_sockets = true;\n#endif\n\n\tif (!use_unix_sockets)\n\t\thostname = \"localhost\";\n\n\n\nGreetings,\n\nAndres Freund", "msg_date": "Thu, 27 Jan 2022 14:36:32 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "\nOn 1/27/22 15:47, Andres Freund wrote:\n> Hi,\n>\n> On 2022-01-27 15:27:17 -0500, Andrew Dunstan wrote:\n>> fairywren is not happy with the recovery tests still.\n> Any more details?\n\n\n(Not actually fairywren, but equivalent) It's hung at\nsrc/test/recovery/t/009_twophase.pl line 84:\n\n\n $psql_rc = $cur_primary->psql('postgres', \"COMMIT PREPARED\n 'xact_009_1'\");\n\n\nThis is an Amazon EC2 WS2019 instance, of type t3.large i.e. 8Gb of\nmemory (not the same machine I reported test times from). Perhaps I need\nto test on another instance. 
Note though that when I tested with a\nucrt64 build, including use of the ucrt64 perl/prove, the recovery test\npassed on an equivalent instance, so that's probably another reason to\nswitch fairywren to using the ucrt64 environment.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 27 Jan 2022 17:51:52 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "Hi,\n\nOn 2022-01-27 14:36:32 -0800, Andres Freund wrote:\n> > On my windows test instance where I noticed this (w10,\n> > msys2/ucrt), check took 516s and this test took 685s.\n> \n> Hm. That's both excruciatingly slow. Way way slower than what I see here, also\n> w10, msys2/ucrt. Any chance the test instance has windows defender running,\n> without a directory exclusion? I saw that trash performance to a near\n> standstill.\n\nCould you post the regression test output with the timings? Unless it's AV, I\ndon't see why a windows VM with a moderate amount of memory should take that\nlong.\n\nDo the test times get less bad if you use PG_TEST_USE_UNIX_SOCKETS=1\nPG_REGRESS_SOCK_DIR: \"c:/some-dir/\"?\n\n\nI see there's reports that the connection-timeout problem can be a lot worse\non windows, because several applications, e.g. docker, add additional names\nfor localhost. 
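A throwaway way to spot such extra loopback aliases (illustrative sketch only -- it parses hosts-file syntax, and the sample entries below are invented for the example):

```python
# Illustrative sketch: list non-comment hosts-file entries that map a
# name to a loopback address.  Each extra alias is another candidate
# address a client may try (and, on Windows, burn ~2s on) before the
# address that actually has a listener.
def loopback_aliases(hosts_text):
    aliases = []
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        addr, *names = line.split()
        if addr in ("127.0.0.1", "::1"):
            aliases.extend(names)
    return aliases

sample = """\
# sample hosts file (entries invented for illustration)
127.0.0.1  localhost
::1        localhost
127.0.0.1  kubernetes.docker.internal
"""
print(loopback_aliases(sample))
```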
Are there any non-commented entries in\nC:\\Windows\\System32\\drivers\\etc\\hosts\n\n\n> Does it get better with the attached patch?\n\nI pushed something like it now - seemed to be no reason to wait, given it\nmakes think less slow on my VM.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 27 Jan 2022 14:59:26 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "Hi,\n\nOn 2022-01-27 17:51:52 -0500, Andrew Dunstan wrote:\n> (Not actually fairywren, but equivalent) It's hung at\n> src/test/recovery/t/009_twophase.pl line 84:\n>\n>\n> $psql_rc = $cur_primary->psql('postgres', \"COMMIT PREPARED\n> 'xact_009_1'\");\n\nThat very likely is the socket-shutdown bug that lead to:\n\ncommit 64b2c6507e5714b5c688b9c5cc551fbedb7b3b58\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: 2022-01-25 12:17:40 -0500\n\n Revert \"graceful shutdown\" changes for Windows, in back branches only.\n\n This reverts commits 6051857fc and ed52c3707, but only in the back\n branches. Further testing has shown that while those changes do fix\n some things, they also break others; in particular, it looks like\n walreceivers fail to detect walsender-initiated connection close\n reliably if the walsender shuts down this way. We'll keep trying to\n improve matters in HEAD, but it now seems unwise to push these changes\n into stable releases.\n\n Discussion: https://postgr.es/m/CA+hUKG+OeoETZQ=Qw5Ub5h3tmwQhBmDA=nuNO3KG=zWfUypFAw@mail.gmail.com\n\nIf you apply that commit, does the problem go away?\n\n\nThat's why I'd suggested to revert them in\nhttps://postgr.es/m/20220125023609.5ohu3nslxgoygihl%40alap3.anarazel.de\n\n\n> This is an Amazon EC2 WS2019 instance, of type t3.large i.e. 8Gb of\n> memory (not the same machine I reported test times from). Perhaps I need\n> to test on another instance. 
Note though that when I tested with a\n> ucrt64 build, including use of the ucrt64 perl/prove, the recovery test\n> passed on an equivalent instance, so that's probably another reason to\n> switch fairywren to using the ucrt64 environment.\n\nWithout the revert I do get through the tests some of the time - imo likely\nthat the hang isn't related to the specific msys/mingw environment.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 27 Jan 2022 15:03:57 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "On Fri, Jan 28, 2022 at 12:03 PM Andres Freund <andres@anarazel.de> wrote:\n> Revert \"graceful shutdown\" changes for Windows, in back branches only.\n\nFTR I'm actively working on a fix for that one for master now (see\nthat other thread where the POC survived Alexander's torture testing).\n\n\n", "msg_date": "Fri, 28 Jan 2022 12:24:20 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "\nOn 1/27/22 18:24, Thomas Munro wrote:\n> On Fri, Jan 28, 2022 at 12:03 PM Andres Freund <andres@anarazel.de> wrote:\n>> Revert \"graceful shutdown\" changes for Windows, in back branches only.\n> FTR I'm actively working on a fix for that one for master now (see\n> that other thread where the POC survived Alexander's torture testing).\n\n\n\nOK, good. A further data point on that: I am not seeing a recovery test\nhang or commit_ts test failure on real W10 machines, including jacana. I\nam only getting them on WS2019 VMs e.g. 
drongo/fairywren.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 28 Jan 2022 09:46:02 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "On Sat, Jan 22, 2022 at 6:00 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Sat, Jan 22, 2022 at 8:48 AM Andres Freund <andres@anarazel.de> wrote:\n> > Unfortunately we don't quite seem there yet:\n>\n> And another way to fail:\n>\n> pg_dump: error: query failed: ERROR: canceling statement due to\n> conflict with recovery\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dangomushi&dt=2022-01-22%2003%3A06%3A42\n>\n> Probably needs hot_standby_feedback on. Will adjust this soon.\n\nSeen again today on prairiedog. Erm, scratch that idea, HS feedback\ninterferes with test results. I guess max_standby_streaming_delay\nshould be increased to 'forever', like in the attached, since pg_dump\nruns for a very long time on prairiedog:\n\n2022-02-01 04:47:59.294 EST [3670:15] 027_stream_regress.pl LOG:\nstatement: SET TRANSACTION ISOLATION LEVEL REPEATABLE READ, READ ONLY\n...\n2022-02-01 04:49:09.881 EST [3683:2585] 027_stream_regress.pl ERROR:\ncanceling statement due to conflict with recovery", "msg_date": "Wed, 2 Feb 2022 13:59:56 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "Hi,\n\nOn 2022-02-02 13:59:56 +1300, Thomas Munro wrote:\n> Seen again today on prairiedog. Erm, scratch that idea, HS feedback\n> interferes with test results.\n\nIt'd not be sufficient anyway, I think. E.g. 
autovacuum truncating a table\nwould not be prevented by hs_f I think?\n\n\n> I guess max_standby_streaming_delay\n> should be increased to 'forever', like in the attached\n\nSeems reasonable.\n\n\n> , since pg_dump runs for a very long time on prairiedog:\n\n> 2022-02-01 04:47:59.294 EST [3670:15] 027_stream_regress.pl LOG:\n> statement: SET TRANSACTION ISOLATION LEVEL REPEATABLE READ, READ ONLY\n> ...\n> 2022-02-01 04:49:09.881 EST [3683:2585] 027_stream_regress.pl ERROR:\n> canceling statement due to conflict with recovery\n\nThat, uh, seems slow. Is it perhaps waiting for a lock? Seems\nCluster.pm::init() should add at least log_lock_waits...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 1 Feb 2022 17:14:02 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "On Wed, Feb 2, 2022 at 2:14 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-02-02 13:59:56 +1300, Thomas Munro wrote:\n> > 2022-02-01 04:47:59.294 EST [3670:15] 027_stream_regress.pl LOG:\n> > statement: SET TRANSACTION ISOLATION LEVEL REPEATABLE READ, READ ONLY\n> > ...\n> > 2022-02-01 04:49:09.881 EST [3683:2585] 027_stream_regress.pl ERROR:\n> > canceling statement due to conflict with recovery\n>\n> That, uh, seems slow. Is it perhaps waiting for a lock? Seems\n> Cluster.pm::init() should add at least log_lock_waits...\n\nI quoted the wrong lines, let me try that again this time for the same\nsession, the one with pid 3683:\n\n2022-02-01 04:48:38.352 EST [3683:15] 027_stream_regress.pl LOG:\nstatement: SET TRANSACTION ISOLATION LEVEL REPEATABLE READ, READ ONLY\n...\n2022-02-01 04:49:09.881 EST [3683:2585] 027_stream_regress.pl ERROR:\ncanceling statement due to conflict with recovery\n\nIt looks like it's processing statements fairly consistently slowly\nthrough the whole period. 
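As a sanity check, the gap between the two quoted log lines (timestamps copied from the log above) lands right around the 30s default of max_standby_streaming_delay:

```python
# Quick check of the gap between the quoted log lines for session 3683:
# from its first statement to the recovery-conflict cancel is ~31.5s,
# just past the default 30s max_standby_streaming_delay.
from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S.%f"
start = datetime.strptime("2022-02-01 04:48:38.352", fmt)
cancel = datetime.strptime("2022-02-01 04:49:09.881", fmt)
elapsed = (cancel - start).total_seconds()
print(elapsed)  # -> 31.529
```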
Each non-trivial statement takes a bit\nunder ~10ms, so it would make sense if by the time we've processed\n~2.5k lines we've clocked up 30 seconds and a VACUUM replay whacks us.\n\n\n", "msg_date": "Wed, 2 Feb 2022 14:35:02 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> It looks like it's processing statements fairly consistently slowly\n> through the whole period. Each non-trivial statement takes a bit\n> under ~10ms, so it would make sense if by the time we've processed\n> ~2.5k lines we've clocked up 30 seconds and a VACUUM replay whacks us.\n\nThis test is set up to time out after 30 seconds? We've long had\nan unofficial baseline that no timeouts under 180 seconds should\nbe used in the buildfarm.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 01 Feb 2022 21:11:24 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "Hi, \n\nOn February 1, 2022 6:11:24 PM PST, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>Thomas Munro <thomas.munro@gmail.com> writes:\n>> It looks like it's processing statements fairly consistently slowly\n>> through the whole period. Each non-trivial statement takes a bit\n>> under ~10ms, so it would make sense if by the time we've processed\n>> ~2.5k lines we've clocked up 30 seconds and a VACUUM replay whacks us.\n>\n>This test is set up to time out after 30 seconds? We've long had\n>an unofficial baseline that no timeouts under 180 seconds should\n>be used in the buildfarm.\n\n30s is the default value of the streaming replay conflict timeout. After that the startup process cancelled the session running pg_dump. So it's not an intentional timeout in the test.\n\nIt's not surprising that pg_dump takes 30s on that old a machine. But more than 2min still surprised me. 
Is that really do be expected?\n\n- Andres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Tue, 01 Feb 2022 18:16:19 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> It's not surprising that pg_dump takes 30s on that old a machine. But more than 2min still surprised me. Is that really do be expected?\n\nIn the previous buildfarm run, that dump took just under 31s:\n\n2022-01-31 14:21:10.358 EST [19325:1] [unknown] LOG: connection received: host=[local]\n2022-01-31 14:21:10.367 EST [19325:2] [unknown] LOG: connection authorized: user=buildfarm database=regression application_name=027_stream_regress.pl\n...\n2022-01-31 14:21:41.139 EST [19325:2663] 027_stream_regress.pl LOG: disconnection: session time: 0:00:30.782 user=buildfarm database=regression host=[local]\n\nIn the failing run, we have:\n\n2022-02-01 04:48:37.757 EST [3683:1] [unknown] LOG: connection received: host=[local]\n2022-02-01 04:48:37.767 EST [3683:2] [unknown] LOG: connection authorized: user=buildfarm database=regression application_name=027_stream_regress.pl\n...\n2022-02-01 04:49:09.719 EST [3683:2584] 027_stream_regress.pl LOG: statement: COPY public.tenk1 (unique1, unique2, two, four, ten, twenty, hundred, thousand, twothousand, fivethous, tenthous, odd, even, stringu1, stringu2, string4) TO stdout;\n2022-02-01 04:49:09.881 EST [3683:2585] 027_stream_regress.pl ERROR: canceling statement due to conflict with recovery\n2022-02-01 04:49:09.881 EST [3683:2586] 027_stream_regress.pl DETAIL: User query might have needed to see row versions that must be removed.\n2022-02-01 04:49:09.881 EST [3683:2587] 027_stream_regress.pl STATEMENT: COPY public.tenk1 (unique1, unique2, two, four, ten, twenty, hundred, thousand, twothousand, fivethous, tenthous, odd, even, stringu1, stringu2, string4) TO 
stdout;\n2022-02-01 04:49:09.889 EST [3685:1] [unknown] LOG: connection received: host=[local]\n2022-02-01 04:49:09.905 EST [3683:2588] 027_stream_regress.pl LOG: could not send data to client: Broken pipe\n2022-02-01 04:49:09.905 EST [3683:2589] 027_stream_regress.pl ERROR: canceling statement due to conflict with recovery\n2022-02-01 04:49:09.905 EST [3683:2590] 027_stream_regress.pl DETAIL: User query might have needed to see row versions that must be removed.\n2022-02-01 04:49:09.906 EST [3683:2591] 027_stream_regress.pl FATAL: connection to client lost\n2022-02-01 04:49:09.935 EST [3683:2592] 027_stream_regress.pl LOG: disconnection: session time: 0:00:32.179 user=buildfarm database=regression host=[local]\n\nThat's only a little over 30s. Where are you getting 2m from?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 01 Feb 2022 21:33:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Seen again today on prairiedog. Erm, scratch that idea, HS feedback\n> interferes with test results. I guess max_standby_streaming_delay\n> should be increased to 'forever', like in the attached, since pg_dump\n> runs for a very long time on prairiedog:\n\nFWIW, I'd vote for keeping a finite timeout, but making it say\nten minutes. If the thing gets stuck for some reason, you don't\nreally want the test waiting forever. 
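In config terms that would be something along these lines (a sketch -- 600s rather than -1, which means "wait forever"):

```ini
# Standby settings for the test: give long-running reads (e.g. pg_dump)
# ten minutes before recovery conflicts cancel them, instead of the 30s
# default -- but still a finite bound, unlike -1.
max_standby_streaming_delay = 600s
```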
(Some buildfarm animals\nhave overall-test-time limits, but I think it's not the default,\nand the behavior when that gets hit is pretty unfriendly anyway\n-- you don't get any report of the run at all.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 01 Feb 2022 21:43:53 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "On Wed, Feb 2, 2022 at 3:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Seen again today on prairiedog. Erm, scratch that idea, HS feedback\n> > interferes with test results. I guess max_standby_streaming_delay\n> > should be increased to 'forever', like in the attached, since pg_dump\n> > runs for a very long time on prairiedog:\n>\n> FWIW, I'd vote for keeping a finite timeout, but making it say\n> ten minutes. If the thing gets stuck for some reason, you don't\n> really want the test waiting forever. (Some buildfarm animals\n> have overall-test-time limits, but I think it's not the default,\n> and the behavior when that gets hit is pretty unfriendly anyway\n> -- you don't get any report of the run at all.)\n\nOk, I've set it to 10 minutes. Thanks.\n\n\n", "msg_date": "Wed, 2 Feb 2022 16:14:04 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "Another failure under 027_stream_regress.pl:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2022-03-16%2005%3A58%3A05\n\n vacuum ... 
FAILED 3463 ms\n\nI'll try to come up with the perl needed to see the regression.diffs\nnext time...\n\n\n", "msg_date": "Sun, 20 Mar 2022 17:20:59 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "On Sun, Mar 20, 2022 at 5:20 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Another failure under 027_stream_regress.pl:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2022-03-16%2005%3A58%3A05\n>\n> vacuum ... FAILED 3463 ms\n>\n> I'll try to come up with the perl needed to see the regression.diffs\n> next time...\n\nHere's my proposed change to achieve that.\n\nHere's an example of where it shows up if it fails (from my\ndeliberately sabotaged CI run\nhttps://cirrus-ci.com/build/6730380228165632 where I was verifying\nthat it also works on Windows):\n\nUnix: https://api.cirrus-ci.com/v1/artifact/task/5421419923243008/log/src/test/recovery/tmp_check/log/regress_log_027_stream_regress\nWindows: https://api.cirrus-ci.com/v1/artifact/task/4717732481466368/log/src/test/recovery/tmp_check/log/regress_log_027_stream_regress", "msg_date": "Sun, 20 Mar 2022 22:36:26 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "\nOn 3/20/22 05:36, Thomas Munro wrote:\n> On Sun, Mar 20, 2022 at 5:20 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> Another failure under 027_stream_regress.pl:\n>>\n>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2022-03-16%2005%3A58%3A05\n>>\n>> vacuum ... 
FAILED 3463 ms\n>>\n>> I'll try to come up with the perl needed to see the regression.diffs\n>> next time...\n> Here's my proposed change to achieve that.\n\n\nI think that's OK.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 20 Mar 2022 09:34:50 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "On Mon, Mar 21, 2022 at 2:34 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> On 3/20/22 05:36, Thomas Munro wrote:\n> > On Sun, Mar 20, 2022 at 5:20 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> >> I'll try to come up with the perl needed to see the regression.diffs\n> >> next time...\n> > Here's my proposed change to achieve that.\n>\n> I think that's OK.\n\nThanks for looking! Pushed.\n\n\n", "msg_date": "Mon, 21 Mar 2022 09:44:22 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "i,\n\nOn Mon, Mar 21, 2022 at 5:45 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Mon, Mar 21, 2022 at 2:34 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> > On 3/20/22 05:36, Thomas Munro wrote:\n> > > On Sun, Mar 20, 2022 at 5:20 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > >> I'll try to come up with the perl needed to see the regression.diffs\n> > >> next time...\n> > > Here's my proposed change to achieve that.\n> >\n> > I think that's OK.\n>\n> Thanks for looking! 
Pushed.\n\nFYI idiacanthus failed 027_stream_regress.pl:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=idiacanthus&dt=2022-03-22%2001%3A58%3A04\n\nThe log shows:\n\n=== dumping /home/bf/build/buildfarm-idiacanthus/HEAD/pgsql.build/src/test/recovery/tmp_check/regression.diffs\n===\ndiff -U3 /home/bf/build/buildfarm-idiacanthus/HEAD/pgsql/src/test/regress/expected/vacuum.out\n/home/bf/build/buildfarm-idiacanthus/HEAD/pgsql.build/src/test/recovery/tmp_check/results/vacuum.out\n--- /home/bf/build/buildfarm-idiacanthus/HEAD/pgsql/src/test/regress/expected/vacuum.out\n2021-07-01 19:00:01.936659446 +0200\n+++ /home/bf/build/buildfarm-idiacanthus/HEAD/pgsql.build/src/test/recovery/tmp_check/results/vacuum.out\n2022-03-22 03:28:09.813377179 +0100\n@@ -181,7 +181,7 @@\n SELECT pg_relation_size('vac_truncate_test') = 0;\n ?column?\n ----------\n- t\n+ f\n (1 row)\n\n VACUUM (TRUNCATE FALSE, FULL TRUE) vac_truncate_test;\n=== EOF ===\nnot ok 2 - regression tests pass\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 22 Mar 2022 12:31:08 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "On Tue, Mar 22, 2022 at 4:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> SELECT pg_relation_size('vac_truncate_test') = 0;\n> ?column?\n> ----------\n> - t\n> + f\n\nThanks. Ahh, déjà vu... this probably needs the same treatment as\nb700f96c and 3414099c provided for the reloptions test. Well, at\nleast the first one. Here's a patch like that.", "msg_date": "Tue, 22 Mar 2022 16:58:49 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "On Mon, Mar 21, 2022 at 8:59 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Thanks. Ahh, déjà vu... 
this probably needs the same treatment as\n> b700f96c and 3414099c provided for the reloptions test. Well, at\n> least the first one. Here's a patch like that.\n\nIf you want to know whether or not the buildfarm will have problems\ndue to VACUUM failing to get a cleanup lock randomly, then I suggest\nthat you use an approach like the one from my patch here:\n\nhttps://postgr.es/m/CAH2-WzkiB-qcsBmWrpzP0nxvrQExoUts1d7TYShg_DrkOHeg4Q@mail.gmail.com\n\nI recently tried it again myself. With the gizmo in place the tests\nfail in exactly the same way you've had problems with on the\nbuildfarm. On the first try, even.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Thu, 24 Mar 2022 20:02:36 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "On Fri, Mar 25, 2022 at 4:03 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> If you want to know whether or not the buildfarm will have problems\n> due to VACUUM failing to get a cleanup lock randomly, then I suggest\n> that you use an approach like the one from my patch here:\n>\n> https://postgr.es/m/CAH2-WzkiB-qcsBmWrpzP0nxvrQExoUts1d7TYShg_DrkOHeg4Q@mail.gmail.com\n>\n> I recently tried it again myself. With the gizmo in place the tests\n> fail in exactly the same way you've had problems with on the\n> buildfarm. On the first try, even.\n\nInteresting. IIUC your chaos gizmo shows that particular vacuum test\nstill failing on master, but that wouldn't happen in real life because\nsince 383f2221 it's a temp table. Your gizmo should probably detect\ntemp rels, as your comment says. 
I was sort of thinking that perhaps\nif DISABLE_PAGE_SKIPPING is eventually made to do what its name sounds\nlike it does, we could remove TEMP from that test and it'd still pass\nwith the gizmo...\n\n\n", "msg_date": "Fri, 25 Mar 2022 16:55:27 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "On Thu, Mar 24, 2022 at 8:56 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Interesting. IIUC your chaos gizmo shows that particular vacuum test\n> still failing on master, but that wouldn't happen in real life because\n> since 383f2221 it's a temp table. Your gizmo should probably detect\n> temp rels, as your comment says. I was sort of thinking that perhaps\n> if DISABLE_PAGE_SKIPPING is eventually made to do what its name sounds\n> like it does, we could remove TEMP from that test and it'd still pass\n> with the gizmo...\n\nWhy not just use VACUUM FREEZE? That should work, because it won't\nsettle for a cleanup lock on any page with an XID < OldestXmin. And\neven if there were only LP_DEAD items on a page, that wouldn't matter\neither, because we don't need a cleanup lock to get rid of those\nanymore. And we consistently do all the same steps for rel truncation\nin the no-cleanup-lock path (lazy_scan_noprune) now.\n\nI think that DISABLE_PAGE_SKIPPING isn't appropriate for this kind of\nthing. It mostly just makes VACUUM not trust the visibility map, which\nisn't going to help. While DISABLE_PAGE_SKIPPING also forces\naggressive mode, that isn't going to help either, unless you somehow\nalso make sure that FreezeLimit is OldestXmin (e.g. by setting\nvacuum_freeze_min_age to 0).\n\nVACUUM FREEZE (without DISABLE_PAGE_SKIPPING) seems like it would do\neverything you want, without using a temp table. 
At least on the\nmaster branch.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 24 Mar 2022 21:06:21 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "Hi,\n\nOn 2022-03-24 21:06:21 -0700, Peter Geoghegan wrote:\n> On Thu, Mar 24, 2022 at 8:56 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Interesting. IIUC your chaos gizmo shows that particular vacuum test\n> > still failing on master, but that wouldn't happen in real life because\n> > since 383f2221 it's a temp table. Your gizmo should probably detect\n> > temp rels, as your comment says. I was sort of thinking that perhaps\n> > if DISABLE_PAGE_SKIPPING is eventually made to do what its name sounds\n> > like it does, we could remove TEMP from that test and it'd still pass\n> > with the gizmo...\n> \n> Why not just use VACUUM FREEZE? That should work, because it won't\n> settle for a cleanup lock on any page with an XID < OldestXmin. And\n> even if there were only LP_DEAD items on a page, that wouldn't matter\n> either, because we don't need a cleanup lock to get rid of those\n> anymore. And we consistently do all the same steps for rel truncation\n> in the no-cleanup-lock path (lazy_scan_noprune) now.\n> \n> I think that DISABLE_PAGE_SKIPPING isn't appropriate for this kind of\n> thing. It mostly just makes VACUUM not trust the visibility map, which\n> isn't going to help. While DISABLE_PAGE_SKIPPING also forces\n> aggressive mode, that isn't going to help either, unless you somehow\n> also make sure that FreezeLimit is OldestXmin (e.g. by setting\n> vacuum_freeze_min_age to 0).\n> \n> VACUUM FREEZE (without DISABLE_PAGE_SKIPPING) seems like it would do\n> everything you want, without using a temp table. 
At least on the\n> master branch.\n\nWe tried that in a prior case:\nhttps://postgr.es/m/20220120052404.sonrhq3f3qgplpzj%40alap3.anarazel.de\n\nI don't know if the same danger applies here though.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 24 Mar 2022 21:16:07 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "On Fri, Mar 25, 2022 at 5:16 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-03-24 21:06:21 -0700, Peter Geoghegan wrote:\n> > VACUUM FREEZE (without DISABLE_PAGE_SKIPPING) seems like it would do\n> > everything you want, without using a temp table. At least on the\n> > master branch.\n>\n> We tried that in a prior case:\n> https://postgr.es/m/20220120052404.sonrhq3f3qgplpzj%40alap3.anarazel.de\n\nYeah, or really, it was Michael that tried that in commit fe246d1c,\nand then we tried more things with 3414099c and b700f96c. It's a bit\nof a belt-and-braces setup admittedly...\n\n\n", "msg_date": "Fri, 25 Mar 2022 17:25:46 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "On Thu, Mar 24, 2022 at 9:16 PM Andres Freund <andres@anarazel.de> wrote:\n> > VACUUM FREEZE (without DISABLE_PAGE_SKIPPING) seems like it would do\n> > everything you want, without using a temp table. At least on the\n> > master branch.\n>\n> We tried that in a prior case:\n> https://postgr.es/m/20220120052404.sonrhq3f3qgplpzj%40alap3.anarazel.de\n\nOh, yeah. 
If some other backend is holding back OldestXmin, and you\ncan't find a way of dealing with that, then you'll need a temp table.\n(Mind you, that trick only works on recent versions too.)\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 24 Mar 2022 21:26:46 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "cfbot found another source of nondeterminism in the regression tests,\ndue to the smaller shared_buffers used in this TAP test:\n\nhttps://cirrus-ci.com/task/4611828654276608\nhttps://api.cirrus-ci.com/v1/artifact/task/4611828654276608/log/src/test/recovery/tmp_check/regression.diffs\n\nTurned out that we had already diagnosed that once before, when tiny\nbuild farm animal chipmunk reported the same, but we didn't commit a\nfix:\n\nhttps://www.postgresql.org/message-id/flat/CA%2BhUKGLTK6ZuEkpeJ05-MEmvmgZveCh%2B_w013m7%2ByKWFSmRcDA%40mail.gmail.com\n\n\n", "msg_date": "Mon, 28 Mar 2022 16:11:24 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> cfbot found another source of nondeterminism in the regression tests,\n> due to the smaller shared_buffers used in this TAP test:\n\nThis failure seems related but not identical:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=myna&dt=2022-04-02%2004%3A00%3A26\n\nportals.out is expecting that the \"foo25ns\" cursor will read\nstarting at the beginning of tenk1, but it's starting somewhere\nelse, which presumably is a syncscan effect.\n\nI think the fundamental instability here is that this TAP test is\nsetting shared_buffers small enough to allow the syncscan logic\nto kick in where it does not in normal testing. 
Maybe we should\njust disable syncscan in this test script?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 02 Apr 2022 01:10:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "I wrote:\n> I think the fundamental instability here is that this TAP test is\n> setting shared_buffers small enough to allow the syncscan logic\n> to kick in where it does not in normal testing. Maybe we should\n> just disable syncscan in this test script?\n\nDid that, we'll see how much it helps.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 04 Apr 2022 12:39:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "On Thu, Dec 09, 2021 at 12:10:23PM +1300, Thomas Munro wrote:\n> This adds 2 whole minutes to the recovery check, when running with the\n> Windows serial-only scripts on Cirrus CI (using Andres's CI patches).\n> For Linux it adds ~20 seconds to the total of -j8 check-world.\n> Hopefully that's time well spent, because it adds test coverage for\n> all the redo routines, and hopefully soon we won't have to run 'em in\n> series on Windows.\n\nShould 027-stream-regress be renamed to something that starts earlier ?\nOff-list earlier this year, Andres referred to 000.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 3 Aug 2022 10:30:50 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "On Thu, Aug 4, 2022 at 3:30 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Thu, Dec 09, 2021 at 12:10:23PM +1300, Thomas Munro wrote:\n> > This adds 2 whole minutes to the recovery check, when running with the\n> > Windows serial-only scripts on Cirrus CI (using Andres's CI patches).\n> > For Linux it adds ~20 seconds to the total of -j8 check-world.\n> > Hopefully that's time 
well spent, because it adds test coverage for\n> > all the redo routines, and hopefully soon we won't have to run 'em in\n> > series on Windows.\n>\n> Should 027-stream-regress be renamed to something that starts earlier ?\n> Off-list earlier this year, Andres referred to 000.\n\nDo you have any data on improved times from doing that?\n\nI have wondered about moving it into 001_stream_rep.pl.\n\n\n", "msg_date": "Thu, 4 Aug 2022 09:24:24 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A test for replay of regression tests" }, { "msg_contents": "On Thu, Aug 04, 2022 at 09:24:24AM +1200, Thomas Munro wrote:\n> On Thu, Aug 4, 2022 at 3:30 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > On Thu, Dec 09, 2021 at 12:10:23PM +1300, Thomas Munro wrote:\n> > > This adds 2 whole minutes to the recovery check, when running with the\n> > > Windows serial-only scripts on Cirrus CI (using Andres's CI patches).\n> > > For Linux it adds ~20 seconds to the total of -j8 check-world.\n> > > Hopefully that's time well spent, because it adds test coverage for\n> > > all the redo routines, and hopefully soon we won't have to run 'em in\n> > > series on Windows.\n> >\n> > Should 027-stream-regress be renamed to something that starts earlier ?\n> > Off-list earlier this year, Andres referred to 000.\n\nSee also: \nhttps://www.postgresql.org/message-id/20220213220709.vjz5rziuhfdpqxrg@alap3.anarazel.de\n\n> Do you have any data on improved times from doing that?\n> \n> I have wondered about moving it into 001_stream_rep.pl.\n\nThe immediate motive for raising the question is due to working on your cygwin\npatch (where I've set PROVE_FLAGS=-j3). The last invocation I have opened ends\nlike:\n\n[20:46:47.577] [13:46:47] t/026_overwrite_contrecord.pl ........ ok 10264 ms ( 0.02 usr 0.02 sys + 11.25 cusr 35.95 csys = 47.24 CPU)\n[20:47:08.087] [13:47:08] t/028_pitr_timelines.pl .............. 
ok 13153 ms ( 0.00 usr 0.00 sys + 4.03 cusr 14.79 csys = 18.82 CPU)\n[20:47:08.999] [13:47:09] t/029_stats_restart.pl ............... ok 12631 ms ( 0.00 usr 0.02 sys + 7.40 cusr 23.30 csys = 30.71 CPU)\n[20:47:34.353] [13:47:34] t/031_recovery_conflict.pl ........... ok 11337 ms ( 0.00 usr 0.00 sys + 3.84 cusr 11.82 csys = 15.66 CPU)\n[20:47:35.070] [13:47:35] t/030_stats_cleanup_replica.pl ....... ok 14054 ms ( 0.02 usr 0.00 sys + 7.64 cusr 25.02 csys = 32.68 CPU)\n[20:48:04.887] [13:48:04] t/032_relfilenode_reuse.pl ........... ok 12755 ms ( 0.00 usr 0.00 sys + 3.36 cusr 11.57 csys = 14.93 CPU)\n[20:48:42.055] [13:48:42] t/033_replay_tsp_drops.pl ............ ok 43529 ms ( 0.00 usr 0.00 sys + 12.29 cusr 41.43 csys = 53.71 CPU)\n[20:50:02.770] [13:50:02] t/027_stream_regress.pl .............. ok 198408 ms ( 0.02 usr 0.06 sys + 44.92 cusr 142.42 csys = 187.42 CPU)\n[20:50:02.771] [13:50:02]\n[20:50:02.771] All tests successful.\n[20:50:02.771] Files=33, Tests=411, 402 wallclock secs ( 0.16 usr 0.27 sys + 138.03 cusr 441.56 csys = 580.01 CPU)\n\nIf 027 had been started sooner, this test might have finished up to 78sec\nearlier. If lots of tests are added in the future, maybe it won't matter, but\nit seems like it does now.\n\nAs I understand, checks are usually parallelized by \"make -j\" and not by\n\"prove\". In that case, starting a slow test later doesn't matter. 
But it'd be\nbetter for anyone who runs tap tests manually, and (I think) for meson.\n\nAs a one-off test on localhost:\ntime make check -C src/test/recovery\n=> 11m42,790s\ntime make check -C src/test/recovery PROVE_FLAGS=-j2\n=> 7m56,315s\n\nAfter renaming it to 001:\ntime make check -C src/test/recovery\n=> 11m33,887s (~same)\ntime make check -C src/test/recovery PROVE_FLAGS=-j2\n=> 6m59,969s\n\nI don't know how it affect the buildfarm (but I think that's not optimized\nprimarily for speed anyway).\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 3 Aug 2022 20:13:00 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: A test for replay of regression tests" } ]
[ { "msg_contents": "Hi, Hackers:\r\n\r\nIn function ExecGetTriggerResultRel, we can see comments:\r\n\r\n> /* First, search through the query result relations */ ...\r\n> /*\r\n> * Third, search through the result relations that were created during\r\n> * tuple routing, if any.\r\n> */\r\n\r\nBut the 'Second' was deleted since commit 1375422c78.\r\n\r\nUpdate the 'Third' to 'Second', please see the attachment.\r\n\r\nThoughts?\r\n\r\nBest wishes\r\nYukun Wang", "msg_date": "Fri, 23 Apr 2021 06:42:15 +0000", "msg_from": "\"wangyukun@fujitsu.com\" <wangyukun@fujitsu.com>", "msg_from_op": true, "msg_subject": "fix a comment" }, { "msg_contents": "On Fri, Apr 23, 2021 at 12:12 PM wangyukun@fujitsu.com\n<wangyukun@fujitsu.com> wrote:\n>\n> Hi, Hackers:\n>\n> In function ExecGetTriggerResultRel, we can see comments:\n>\n> > /* First, search through the query result relations */ ...\n> > /*\n> > * Third, search through the result relations that were created during\n> > * tuple routing, if any.\n> > */\n>\n> But the 'Second' was deleted since commit 1375422c78.\n>\n> Update the 'Third' to 'Second', please see the attachment.\n>\n> Thoughts?\n>\n\nWell yes, looks good to me.\n\nHow about simply removing these numbering?\n\nRegards,\nAmul\n\n\n", "msg_date": "Fri, 23 Apr 2021 12:21:07 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fix a comment" }, { "msg_contents": "Hi, Amul\r\n\r\nThank you for reviewing.\r\n\r\n> How about simply removing these numbering?\r\n\r\nAgree. 
Please see the v2 patch which deletes the number in the comment.\r\n\r\nBest wishes\r\nYukun Wang\r\n\r\n-----Original Message-----\r\nFrom: Amul Sul <sulamul@gmail.com> \r\nSent: Friday, April 23, 2021 3:51 PM\r\nTo: Wang, Yukun/王 俞坤 <wangyukun@fujitsu.com>\r\nCc: pgsql-hackers@postgresql.org\r\nSubject: Re: fix a comment\r\n\r\nOn Fri, Apr 23, 2021 at 12:12 PM wangyukun@fujitsu.com <wangyukun@fujitsu.com> wrote:\r\n>\r\n> Hi, Hackers:\r\n>\r\n> In function ExecGetTriggerResultRel, we can see comments:\r\n>\r\n> > /* First, search through the query result relations */ ...\r\n> > /*\r\n> > * Third, search through the result relations that were created \r\n> > during\r\n> > * tuple routing, if any.\r\n> > */\r\n>\r\n> But the 'Second' was deleted since commit 1375422c78.\r\n>\r\n> Update the 'Third' to 'Second', please see the attachment.\r\n>\r\n> Thoughts?\r\n>\r\n\r\nWell yes, looks good to me.\r\n\r\nHow about simply removing these numbering?\r\n\r\nRegards,\r\nAmul", "msg_date": "Fri, 23 Apr 2021 07:03:40 +0000", "msg_from": "\"wangyukun@fujitsu.com\" <wangyukun@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: fix a comment" }, { "msg_contents": "On Fri, Apr 23, 2021 at 07:03:40AM +0000, wangyukun@fujitsu.com wrote:\n> Agree. Please see the v2 patch which deletes the number in the comment.\n\nIndeed, this set of comments became a bit obsolete after 1375422, as\nyou said upthread. This simplification looks fine to me, so\napplied. I am in a mood for such patches since yesterday..\n--\nMichael", "msg_date": "Sat, 24 Apr 2021 15:13:26 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: fix a comment" }, { "msg_contents": "On Sat, Apr 24, 2021 at 11:43 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Apr 23, 2021 at 07:03:40AM +0000, wangyukun@fujitsu.com wrote:\n> > Agree. 
Please see the v2 patch which deletes the number in the comment.\n>\n> Indeed, this set of comments became a bit obsolete after 1375422, as\n> you said upthread. This simplification looks fine to me, so\n> applied. I am in a mood for such patches since yesterday..\n\n:)\n\nThank you !\n\nRegards,\nAmul", "msg_date": "Sat, 24 Apr 2021 11:46:05 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fix a comment" } ]
[ { "msg_contents": "Hi,\n\nI'm trying to test Postgres code for any unaligned memory accesses. I\nused a hack shown at [1] and put it in exec_simple_query, then I'm\nseeing a SIGBUS error from SplitIdentifierString's strncpy, see [2].\nIt looks like the SIGBUS error occurs even if a simple memcpy(for\ntesting purpose) is done in recomputeNamespacePath or\nSplitIdentifierString.\n\nI'm not sure this is the right way. I would like to know whether there\nis a standard way of testing Postgres code for any unaligned memory\naccesses. Thanks. Any help would be appreciated.\n\n[1] - https://www.programmersought.com/article/17701994124/\n+/* Enable Alignment Checking */\n+#if defined(__GNUC__)\n+# if defined(__i386__)\n+ /* Enable Alignment Checking on x86 */\n+ __asm__(\"pushf\\norl $0x40000,(%esp)\\npopf\");\n+# elif defined(__x86_64__)\n+ /* Enable Alignment Checking on x86_64 */\n+ __asm__(\"pushf\\norl $0x40000,(%rsp)\\npopf\");\n+# endif\n+#endif\n\n[2]\nProgram received signal SIGBUS, Bus error.\n0x00007f5067188d36 in __strncpy_sse2_unaligned () from /lib64/libc.so.6\n(gdb) bt\n#0 0x00007f5067188d36 in __strncpy_sse2_unaligned () from /lib64/libc.so.6\n#1 0x0000000000ada740 in SplitIdentifierString (rawstring=0x1146620 \"\\\"$user\",\n separator=44 ',', namelist=0x7ffcdf1911d0) at varlena.c:3817\n#2 0x00000000005d203b in recomputeNamespacePath () at namespace.c:3761\n#3 0x00000000005cde11 in FuncnameGetCandidates (names=0x1145e08,\nnargs=2, argnames=0x0,\n expand_variadic=true, expand_defaults=true, missing_ok=false) at\nnamespace.c:971\n#4 0x0000000000647dcb in func_get_detail (funcname=0x1145e08, fargs=0x1146570,\n fargnames=0x0, nargs=2, argtypes=0x7ffcdf191540, expand_variadic=true,\n expand_defaults=true, funcid=0x7ffcdf1916d8, rettype=0x7ffcdf1916dc,\n retset=0x7ffcdf19152f, nvargs=0x7ffcdf191528, vatype=0x7ffcdf191524,\n true_typeids=0x7ffcdf191538, argdefaults=0x7ffcdf191530) at\nparse_func.c:1421\n#5 0x0000000000645961 in ParseFuncOrColumn 
(pstate=0x11462e8,\nfuncname=0x1145e08,\n fargs=0x1146570, last_srf=0x0, fn=0x1145f28, proc_call=false, location=14)\n at parse_func.c:265\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 23 Apr 2021 15:51:10 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "How to test Postgres for any unaligned memory accesses?" }, { "msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> I'm trying to test Postgres code for any unaligned memory accesses. I\n> used a hack shown at [1] and put it in exec_simple_query, then I'm\n> seeing a SIGBUS error from SplitIdentifierString's strncpy, see [2].\n\nRegardless of Postgres' policy about alignment safety, glibc sees\nno reason to avoid unaligned accesses on x86 hardware. If you want\nto test this sort of thing on hardware that's not actually alignment\npicky, you have to enlist the toolchain's help.\n\n> I'm not sure this is the right way. I would like to know whether there\n> is a standard way of testing Postgres code for any unaligned memory\n> accesses. Thanks. Any help would be appreciated.\n\nPer c.h, late-model compilers have options for this:\n\n * Testing can be done with \"-fsanitize=alignment -fsanitize-trap=alignment\"\n * on clang, or \"-fsanitize=alignment -fno-sanitize-recover=alignment\" on gcc.\n\nWe have at least one buildfarm member using the former. I have no idea\nhow water-tight these checks are though. They don't seem to cause very\nmuch slowdown, which is suspicious :-(\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 23 Apr 2021 09:55:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: How to test Postgres for any unaligned memory accesses?" }, { "msg_contents": "On Fri, Apr 23, 2021 at 7:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I'm not sure this is the right way. 
I would like to know whether there\n> > is a standard way of testing Postgres code for any unaligned memory\n> > accesses. Thanks. Any help would be appreciated.\n>\n> Per c.h, late-model compilers have options for this:\n>\n> * Testing can be done with \"-fsanitize=alignment -fsanitize-trap=alignment\"\n> * on clang, or \"-fsanitize=alignment -fno-sanitize-recover=alignment\" on gcc.\n\nThanks Tom!\n\nI used the above gcc compiler flags to see if they catch memory\nalignment issues. The way I tested on my dev system (x86_64 platform\nwith Ubuntu OS) was that I commented out max aligning specialSize in\nPageInit, compiled the source code with and without the alignment\nflags. make check failed with the alignment checking flags, it passed\nwithout the flags.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 26 Apr 2021 19:32:01 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: How to test Postgres for any unaligned memory accesses?" } ]
[ { "msg_contents": "More fixes like the one Peter committed as 9bd563aa9.\nI eyeballed the HTML to make sure this looks right.\n\n From a8b782cde7c5d6eef1e3876636feb652bc5f3711 Mon Sep 17 00:00:00 2001\nFrom: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Thu, 22 Apr 2021 21:10:49 -0500\nSubject: [PATCH] Remove extraneous whitespace in tags\n\ngit grep -E '<([^>]*)>[^<]* </\\1>' doc/src/sgml |grep -Evw 'optional|xsl|lineannotation|entry|prompt|computeroutput'\n---\n doc/src/sgml/maintenance.sgml | 2 +-\n doc/src/sgml/mvcc.sgml | 2 +-\n doc/src/sgml/pgcrypto.sgml | 2 +-\n doc/src/sgml/ref/pg_rewind.sgml | 2 +-\n doc/src/sgml/runtime.sgml | 2 +-\n 5 files changed, 5 insertions(+), 5 deletions(-)\n\ndiff --git a/doc/src/sgml/maintenance.sgml b/doc/src/sgml/maintenance.sgml\nindex 4adb34a21b..ee6113926a 100644\n--- a/doc/src/sgml/maintenance.sgml\n+++ b/doc/src/sgml/maintenance.sgml\n@@ -719,7 +719,7 @@ HINT: Stop the postmaster and vacuum that database in single-user mode.\n <productname>PostgreSQL</productname> has an optional but highly\n recommended feature called <firstterm>autovacuum</firstterm>,\n whose purpose is to automate the execution of\n- <command>VACUUM</command> and <command>ANALYZE </command> commands.\n+ <command>VACUUM</command> and <command>ANALYZE</command> commands.\n When enabled, autovacuum checks for\n tables that have had a large number of inserted, updated or deleted\n tuples. 
These checks use the statistics collection facility;\ndiff --git a/doc/src/sgml/mvcc.sgml b/doc/src/sgml/mvcc.sgml\nindex b46cba8158..6cb9c63161 100644\n--- a/doc/src/sgml/mvcc.sgml\n+++ b/doc/src/sgml/mvcc.sgml\n@@ -1074,7 +1074,7 @@ ERROR: could not serialize access due to read/write dependencies among transact\n \n \n <table tocentry=\"1\" id=\"table-lock-compatibility\">\n- <title> Conflicting Lock Modes</title>\n+ <title>Conflicting Lock Modes</title>\n <tgroup cols=\"9\">\n <colspec colnum=\"1\" colwidth=\"1.25*\"/>\n <colspec colnum=\"2\" colwidth=\"1*\" colname=\"lockst\"/>\ndiff --git a/doc/src/sgml/pgcrypto.sgml b/doc/src/sgml/pgcrypto.sgml\nindex b6bb23de0f..13770dfc6f 100644\n--- a/doc/src/sgml/pgcrypto.sgml\n+++ b/doc/src/sgml/pgcrypto.sgml\n@@ -1410,7 +1410,7 @@ gen_random_uuid() returns uuid\n <entry>KAME kame/sys/crypto</entry>\n </row>\n <row>\n- <entry>SHA256/384/512 </entry>\n+ <entry>SHA256/384/512</entry>\n <entry>Aaron D. Gifford</entry>\n <entry>OpenBSD sys/crypto</entry>\n </row>\ndiff --git a/doc/src/sgml/ref/pg_rewind.sgml b/doc/src/sgml/ref/pg_rewind.sgml\nindex 07aae75d8b..33e6bb64ad 100644\n--- a/doc/src/sgml/ref/pg_rewind.sgml\n+++ b/doc/src/sgml/ref/pg_rewind.sgml\n@@ -25,7 +25,7 @@ PostgreSQL documentation\n <arg rep=\"repeat\"><replaceable>option</replaceable></arg>\n <group choice=\"plain\">\n <group choice=\"req\">\n- <arg choice=\"plain\"><option>-D </option></arg>\n+ <arg choice=\"plain\"><option>-D</option></arg>\n <arg choice=\"plain\"><option>--target-pgdata</option></arg>\n </group>\n <replaceable> directory</replaceable>\ndiff --git a/doc/src/sgml/runtime.sgml b/doc/src/sgml/runtime.sgml\nindex 001d195b8e..f1cbc1d9e9 100644\n--- a/doc/src/sgml/runtime.sgml\n+++ b/doc/src/sgml/runtime.sgml\n@@ -2258,7 +2258,7 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433\n The certificates of <quote>intermediate</quote> certificate authorities\n can also be appended to the file. 
Doing this avoids the necessity of\n storing intermediate certificates on clients, assuming the root and\n- intermediate certificates were created with <literal>v3_ca </literal>\n+ intermediate certificates were created with <literal>v3_ca</literal>\n extensions. (This sets the certificate's basic constraint of\n <literal>CA</literal> to <literal>true</literal>.)\n This allows easier expiration of intermediate certificates.\n-- \n2.17.0\n\n\n\n", "msg_date": "Fri, 23 Apr 2021 13:43:38 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "[PATCH] Remove extraneous whitespace in tags: > foo< and >bar <" }, { "msg_contents": "On Fri, Apr 23, 2021 at 01:43:38PM -0500, Justin Pryzby wrote:\n> More fixes like the one Peter committed as 9bd563aa9.\n> I eyeballed the HTML to make sure this looks right.\n\n- <title> Conflicting Lock Modes</title>\n+ <title>Conflicting Lock Modes</title>\nThat's a nice regex-fu here to detect cases like this one.\n\nThanks, applied.\n--\nMichael", "msg_date": "Sat, 24 Apr 2021 10:47:18 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Remove extraneous whitespace in tags: > foo< and >bar <" } ]
[ { "msg_contents": "Hi,\n\nI started to write a test for $Subject, which I think we sorely need.\n\nCurrently my approach is to:\n- start a cluster, create a few tables with test data\n- acquire SHARE UPDATE EXCLUSIVE in a prepared transaction, to prevent\n autovacuum from doing anything\n- cause dead tuples to exist\n- restart\n- run pg_resetwal -x 2000027648\n- do things like acquiring pins on pages that block vacuum from progressing\n- commit prepared transaction\n- wait for template0, template1 datfrozenxid to increase\n- wait for relfrozenxid for most relations in postgres to increase\n- release buffer pin\n- wait for postgres datfrozenxid to increase\n\nSo far so good. But I've encountered a few things that stand in the way of\nenabling such a test by default:\n\n1) During startup StartupSUBTRANS() zeroes out all pages between\n oldestActiveXID and nextXid. That takes 8s on my workstation, but only\n because I have plenty of memory - pg_subtrans ends up 14GB as I currently do\n the test. Clearly not something we could do on the BF.\n\n2) FAILSAFE_MIN_PAGES is 4GB - which seems to make it infeasible to test the\n failsafe mode, we can't really create 4GB relations on the BF. While\n writing the tests I've lowered this to 4MB...\n\n3) pg_resetwal -x requires carefully choosing an xid: It needs to be the\n first xid on a clog page. It's not hard to determine which xids are but it\n depends on BLCKSZ and a few constants in clog.c. I've for now hardcoded a\n value appropriate for 8KB, but ...\n\n\nI have 2 1/2 ideas about addressing 1):\n\n- We could expose functionality to advance nextXid to a future value at\n runtime, without filling in clog/subtrans pages. Would probably have to live\n in varsup.c and be exposed via regress.so or such?\n\n- The only reason StartupSUBTRANS() does that work is because of the prepared\n transaction holding back oldestActiveXID. 
That transaction in turn exists to\n prevent autovacuum from doing anything before we do test setup\n steps.\n\n Perhaps it'd be sufficient to set autovacuum_naptime really high initially,\n perform the test setup, set naptime to something lower, reload config. But\n I'm worried that might not be reliable: If something ends up allocating an\n xid we'd potentially reach the path in GetNewTransaction() that wakes up the\n launcher? But probably there wouldn't be anything doing so?\n\n Another aspect that might not make this a good choice is that it actually\n seems relevant to be able to test cases where there are very old still\n running transactions...\n\n- As a variant of the previous idea: If that turns out to be unreliable, we\n could instead set nextxid, start in single user mode, create a blocking 2PC\n transaction, start normally. Because there's no old active xid we'd not run\n into the StartupSUBTRANS problem.\n\n\nFor 2), I don't really have a better idea than making that configurable\nsomehow?\n\n3) is probably tolerable for now, we could skip the test if BLCKSZ isn't 8KB,\nor we could hardcode the calculation for different block sizes.\n\n\n\nI noticed one minor bug that's likely new:\n\n2021-04-23 13:32:30.899 PDT [2027738] LOG: automatic aggressive vacuum to prevent wraparound of table \"postgres.public.small_trunc\": index scans: 1\n pages: 400 removed, 28 remain, 0 skipped due to pins, 0 skipped frozen\n tuples: 14000 removed, 1000 remain, 0 are dead but not yet removable, oldest xmin: 2000027651\n buffer usage: 735 hits, 1262 misses, 874 dirtied\n index scan needed: 401 pages from table (1432.14% of total) had 14000 dead item identifiers removed\n index \"small_trunc_pkey\": pages: 43 in total, 37 newly deleted, 37 currently deleted, 0 reusable\n avg read rate: 559.048 MB/s, avg write rate: 387.170 MB/s\n system usage: CPU: user: 0.01 s, system: 0.00 s, elapsed: 0.01 s\n WAL usage: 1809 records, 474 full page images, 3977538 bytes\n\n'1432.14% of 
total' - looks like removed pages need to be added before the\npercentage calculation?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 23 Apr 2021 13:43:06 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "On Fri, Apr 23, 2021 at 01:43:06PM -0700, Andres Freund wrote:\n> 2) FAILSAFE_MIN_PAGES is 4GB - which seems to make it infeasible to test the\n> failsafe mode, we can't really create 4GB relations on the BF. While\n> writing the tests I've lowered this to 4MB...\n\n> For 2), I don't really have a better idea than making that configurable\n> somehow?\n\nDoes it work to shut down the cluster and create the .0,.1,.2,.3 segments of a\nnew, empty relation with zero blocks using something like truncate -s 1G ?\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 23 Apr 2021 18:08:12 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "On Fri, Apr 23, 2021 at 1:43 PM Andres Freund <andres@anarazel.de> wrote:\n> I started to write a test for $Subject, which I think we sorely need.\n\n+1\n\n> Currently my approach is to:\n> - start a cluster, create a few tables with test data\n> - acquire SHARE UPDATE EXCLUSIVE in a prepared transaction, to prevent\n> autovacuum from doing anything\n> - cause dead tuples to exist\n> - restart\n> - run pg_resetwal -x 2000027648\n> - do things like acquiring pins on pages that block vacuum from progressing\n> - commit prepared transaction\n> - wait for template0, template1 datfrozenxid to increase\n> - wait for relfrozenxid for most relations in postgres to increase\n> - release buffer pin\n> - wait for postgres datfrozenxid to increase\n\nJust having a standard-ish way to do stress testing like this would\nadd something.\n\n> 2) FAILSAFE_MIN_PAGES is 4GB - which seems to make it infeasible to test 
the\n> failsafe mode, we can't really create 4GB relations on the BF. While\n> writing the tests I've lowered this to 4MB...\n\nThe only reason that I chose 4GB for FAILSAFE_MIN_PAGES is because the\nrelated VACUUM_FSM_EVERY_PAGES constant was 8GB -- the latter limits\nhow often we'll consider the failsafe in the single-pass/no-indexes\ncase.\n\nI see no reason why it cannot be changed now. VACUUM_FSM_EVERY_PAGES\nalso frustrates FSM testing in the single-pass case in about the same\nway, so maybe that should be considered as well? Note that the FSM\nhandling for the single pass case is actually a bit different to the\ntwo pass/has-indexes case, since the single pass case calls\nlazy_vacuum_heap_page() directly in its first and only pass over the\nheap (that's the whole point of having it of course).\n\n> 3) pg_resetwal -x requires to carefully choose an xid: It needs to be the\n> first xid on a clog page. It's not hard to determine which xids are but it\n> depends on BLCKSZ and a few constants in clog.c. I've for now hardcoded a\n> value appropriate for 8KB, but ...\n\nUgh.\n\n> For 2), I don't really have a better idea than making that configurable\n> somehow?\n\nThat could make sense as a developer/testing option, I suppose. 
I just\ndoubt that it makes sense as anything else.\n\n> 2021-04-23 13:32:30.899 PDT [2027738] LOG: automatic aggressive vacuum to prevent wraparound of table \"postgres.public.small_trunc\": index scans: 1\n> pages: 400 removed, 28 remain, 0 skipped due to pins, 0 skipped frozen\n> tuples: 14000 removed, 1000 remain, 0 are dead but not yet removable, oldest xmin: 2000027651\n> buffer usage: 735 hits, 1262 misses, 874 dirtied\n> index scan needed: 401 pages from table (1432.14% of total) had 14000 dead item identifiers removed\n> index \"small_trunc_pkey\": pages: 43 in total, 37 newly deleted, 37 currently deleted, 0 reusable\n> avg read rate: 559.048 MB/s, avg write rate: 387.170 MB/s\n> system usage: CPU: user: 0.01 s, system: 0.00 s, elapsed: 0.01 s\n> WAL usage: 1809 records, 474 full page images, 3977538 bytes\n>\n> '1432.14% of total' - looks like removed pages need to be added before the\n> percentage calculation?\n\nClearly this needs to account for removed heap pages in order to\nconsistently express the percentage of pages with LP_DEAD items in\nterms of a percentage of the original table size. I can fix this\nshortly.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Fri, 23 Apr 2021 16:12:33 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "Hi,\n\nOn 2021-04-23 18:08:12 -0500, Justin Pryzby wrote:\n> On Fri, Apr 23, 2021 at 01:43:06PM -0700, Andres Freund wrote:\n> > 2) FAILSAFE_MIN_PAGES is 4GB - which seems to make it infeasible to test the\n> > failsafe mode, we can't really create 4GB relations on the BF. 
While\n> > writing the tests I've lowered this to 4MB...\n> \n> > For 2), I don't really have a better idea than making that configurable\n> > somehow?\n> \n> Does it work to shut down the cluster and create the .0,.1,.2,.3 segments of a\n> new, empty relation with zero blocks using something like truncate -s 1G ?\n\nI'd like this to be portable to at least windows - I don't know how well\nthat deals with sparse files. But the bigger issue is that that IIRC\nwill trigger vacuum to try to initialize all those pages, which will\nthen force all that space to be allocated anyway...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 23 Apr 2021 16:26:17 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "Hi,\n\nOn 2021-04-23 16:12:33 -0700, Peter Geoghegan wrote:\n> The only reason that I chose 4GB for FAILSAFE_MIN_PAGES is because the\n> related VACUUM_FSM_EVERY_PAGES constant was 8GB -- the latter limits\n> how often we'll consider the failsafe in the single-pass/no-indexes\n> case.\n\nI don't really understand why it makes sense to tie FAILSAFE_MIN_PAGES\nand VACUUM_FSM_EVERY_PAGES together? They seem pretty independent to me?\n\n\n\n> I see no reason why it cannot be changed now. VACUUM_FSM_EVERY_PAGES\n> also frustrates FSM testing in the single-pass case in about the same\n> way, so maybe that should be considered as well?
Note that the FSM\n> handling for the single pass case is actually a bit different to the\n> two pass/has-indexes case, since the single pass case calls\n> lazy_vacuum_heap_page() directly in its first and only pass over the\n> heap (that's the whole point of having it of course).\n\nI'm not opposed to lowering VACUUM_FSM_EVERY_PAGES (the costs don't seem\nall that high compared to vacuuming?), but I don't think there's as\nclear a need for testing around that as there is around wraparound.\n\n\nThe failsafe mode affects the table scan itself by disabling cost\nlimiting. As far as I can see the ways it triggers for the table scan (vs\ntruncation or index processing) are:\n\n1) Before vacuuming starts, for heap phases and indexes, if already\n   necessary at that point\n2) For a table with indexes, before/after each index vacuum, if now\n   necessary\n3) On a table without indexes, every 8GB, iff there are dead tuples, if now necessary\n\nWhy would we want to trigger the failsafe mode during a scan of a table\nwith dead tuples and no indexes, but not on a table without dead tuples\nor with indexes but fewer than m_w_m dead tuples? That makes little\nsense to me.\n\n\nIt seems that for the no-index case the warning message is quite off?\n\n\t\tereport(WARNING,\n\t\t\t\t(errmsg(\"abandoned index vacuuming of table \\\"%s.%s.%s\\\" as a failsafe after %d index scans\",\n\nDoesn't exactly make one understand that vacuum cost limiting now is\ndisabled? And is confusing because there would never be index vacuuming?\n\nAnd even in the cases indexes exist, it's odd to talk about abandoning\nindex vacuuming that hasn't even started yet?\n\n\n> > For 2), I don't really have a better idea than making that configurable\n> > somehow?\n> \n> That could make sense as a developer/testing option, I suppose. I just\n> doubt that it makes sense as anything else.\n\nYea, I only was thinking of making it configurable to be able to test\nit.
If we change the limit to something considerably lower I wouldn't\nsee a need for that anymore.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 23 Apr 2021 17:29:21 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "On Fri, Apr 23, 2021 at 5:29 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2021-04-23 16:12:33 -0700, Peter Geoghegan wrote:\n> > The only reason that I chose 4GB for FAILSAFE_MIN_PAGES is because the\n> > related VACUUM_FSM_EVERY_PAGES constant was 8GB -- the latter limits\n> > how often we'll consider the failsafe in the single-pass/no-indexes\n> > case.\n>\n> I don't really understand why it makes sense to tie FAILSAFE_MIN_PAGES\n> and VACUUM_FSM_EVERY_PAGES together? They seem pretty independent to me?\n\nVACUUM_FSM_EVERY_PAGES controls how often VACUUM does work that\nusually takes place right after the two pass case finishes a round of\nindex and heap vacuuming. This is work that we certainly don't want to\ndo every time we process a single heap page in the one-pass/no-indexes\ncase. Initially this just meant FSM vacuuming, but it now includes a\nfailsafe check.\n\nOf course all of the precise details here are fairly arbitrary\n(including VACUUM_FSM_EVERY_PAGES, which has been around for a couple\nof releases now). The overall goal that I had in mind was to make the\none-pass case's use of the failsafe have analogous behavior to the\ntwo-pass/has-indexes case -- a goal which was itself somewhat\narbitrary.\n\n> The failsafe mode affects the table scan itself by disabling cost\n> limiting.
As far as I can see the ways it triggers for the table scan (vs\n> truncation or index processing) are:\n>\n> 1) Before vacuuming starts, for heap phases and indexes, if already\n> necessary at that point\n> 2) For a table with indexes, before/after each index vacuum, if now\n> necessary\n> 3) On a table without indexes, every 8GB, iff there are dead tuples, if now necessary\n>\n> Why would we want to trigger the failsafe mode during a scan of a table\n> with dead tuples and no indexes, but not on a table without dead tuples\n> or with indexes but fewer than m_w_m dead tuples? That makes little\n> sense to me.\n\nWhat alternative does make sense to you?\n\nIt seemed important to put the failsafe check at points where we do\nother analogous work in all cases. We made a pragmatic trade-off. In\ntheory almost any scheme might not check often enough, and/or might\ncheck too frequently.\n\n> It seems that for the no-index case the warning message is quite off?\n\nI'll fix that up some point soon. FWIW this happened because the\nsupport for one-pass VACUUM was added quite late, at Robert's request.\n\nAnother issue with the failsafe commit is that we haven't considered\nthe autovacuum_multixact_freeze_max_age table reloption -- we only\ncheck the GUC. That might have accidentally been the right thing to\ndo, though, since the reloption is interpreted as lower than the GUC\nin all cases anyway -- arguably the\nautovacuum_multixact_freeze_max_age GUC should be all we care about\nanyway. I will need to think about this question some more, though.\n\n> > > For 2), I don't really have a better idea than making that configurable\n> > > somehow?\n> >\n> > That could make sense as a developer/testing option, I suppose. I just\n> > doubt that it makes sense as anything else.\n>\n> Yea, I only was thinking of making it configurable to be able to test\n> it.
If we change the limit to something considerably lower I wouldn't\n> see a need for that anymore.\n\nIt would probably be okay to just lower it significantly. Not sure if\nthat's the best approach, though. Will pick it up next week.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 23 Apr 2021 19:15:43 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "Hi,\n\nOn 2021-04-23 19:15:43 -0700, Peter Geoghegan wrote:\n> > The failsafe mode affects the table scan itself by disabling cost\n> > limiting. As far as I can see the ways it triggers for the table scan (vs\n> > truncation or index processing) are:\n> >\n> > 1) Before vacuuming starts, for heap phases and indexes, if already\n> > necessary at that point\n> > 2) For a table with indexes, before/after each index vacuum, if now\n> > necessary\n> > 3) On a table without indexes, every 8GB, iff there are dead tuples, if now necessary\n> >\n> > Why would we want to trigger the failsafe mode during a scan of a table\n> > with dead tuples and no indexes, but not on a table without dead tuples\n> > or with indexes but fewer than m_w_m dead tuples? That makes little\n> > sense to me.\n> \n> What alternative does make sense to you?\n\nCheck it every so often, independent of whether there are indexes or\ndead tuples?
Or just check it at the boundaries.\n\nI'd make it dependent on the number of pages scanned, rather than the\nblock distance to the last check - otherwise we might end up doing it\nway too often when there's only a few individual pages not in the freeze\nmap.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 23 Apr 2021 19:33:14 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "On Fri, Apr 23, 2021 at 7:33 PM Andres Freund <andres@anarazel.de> wrote:\n> Check it every so often, independent of whether there are indexes or\n> dead tuples? Or just check it at the boundaries.\n\nI think that the former suggestion might be better -- I actually\nthought about doing it that way myself.\n\nThe latter suggestion sounds like you're suggesting that we just check\nit at the beginning and the end in all cases (we do the beginning in\nall cases already, but now we'd also do the end outside of the loop in\nall cases). Is that right? If that is what you meant, then you should\nnote that there'd hardly be any check in the one-pass case with that\nscheme (apart from the initial check that we do already). The only\nwork we'd be skipping at the end (in the event of that check\ntriggering the failsafe) would be heap truncation, which (as you've\npointed out yourself) doesn't seem particularly likely to matter.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 23 Apr 2021 19:42:30 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "Hi,\n\nOn 2021-04-23 19:42:30 -0700, Peter Geoghegan wrote:\n> On Fri, Apr 23, 2021 at 7:33 PM Andres Freund <andres@anarazel.de> wrote:\n> > Check it every so often, independent of whether there are indexes or\n> > dead tuples? 
Or just check it at the boundaries.\n>\n> I think that the former suggestion might be better -- I actually\n> thought about doing it that way myself.\n\nCool.\n\n\n> The latter suggestion sounds like you're suggesting that we just check\n> it at the beginning and the end in all cases (we do the beginning in\n> all cases already, but now we'd also do the end outside of the loop in\n> all cases). Is that right?\n\nYes.\n\n\n> If that is what you meant, then you should note that there'd hardly be\n> any check in the one-pass case with that scheme (apart from the\n> initial check that we do already). The only work we'd be skipping at\n> the end (in the event of that check triggering the failsafe) would be\n> heap truncation, which (as you've pointed out yourself) doesn't seem\n> particularly likely to matter.\n\nI mainly suggested it because to me the current seems hard to\nunderstand. I do think it'd be better to check more often. But checking\ndepending on the amount of dead tuples at the right time doesn't strike\nme as a good idea - a lot of anti-wraparound vacuums will mainly be\nfreezing tuples, rather than removing a lot of dead rows. Which makes it\nhard to understand when the failsafe kicks in.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 23 Apr 2021 19:53:03 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "On Fri, Apr 23, 2021 at 7:53 PM Andres Freund <andres@anarazel.de> wrote:\n> I mainly suggested it because to me the current seems hard to\n> understand. I do think it'd be better to check more often. But checking\n> depending on the amount of dead tuples at the right time doesn't strike\n> me as a good idea - a lot of anti-wraparound vacuums will mainly be\n> freezing tuples, rather than removing a lot of dead rows. 
Which makes it\n> hard to understand when the failsafe kicks in.\n\nI'm convinced -- decoupling the logic from the one-pass-not-two pass\ncase seems likely to be simpler and more useful. For both the one pass\nand two pass/has indexes case.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 23 Apr 2021 19:56:38 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "On Fri, Apr 23, 2021 at 7:56 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I'm convinced -- decoupling the logic from the one-pass-not-two pass\n> case seems likely to be simpler and more useful. For both the one pass\n> and two pass/has indexes case.\n\nAttached draft patch does it that way.\n\n-- \nPeter Geoghegan", "msg_date": "Thu, 13 May 2021 18:03:47 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "On Sat, Apr 24, 2021 at 11:16 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Fri, Apr 23, 2021 at 5:29 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2021-04-23 16:12:33 -0700, Peter Geoghegan wrote:\n> > > The only reason that I chose 4GB for FAILSAFE_MIN_PAGES is because the\n> > > related VACUUM_FSM_EVERY_PAGES constant was 8GB -- the latter limits\n> > > how often we'll consider the failsafe in the single-pass/no-indexes\n> > > case.\n> >\n> > I don't really understand why it makes sense to tie FAILSAFE_MIN_PAGES\n> > and VACUUM_FSM_EVERY_PAGES together? They seem pretty independent to me?\n>\n> VACUUM_FSM_EVERY_PAGES controls how often VACUUM does work that\n> usually takes place right after the two pass case finishes a round of\n> index and heap vacuuming. This is work that we certainly don't want to\n> do every time we process a single heap page in the one-pass/no-indexes\n> case. 
Initially this just meant FSM vacuuming, but it now includes a\n> failsafe check.\n>\n> Of course all of the precise details here are fairly arbitrary\n> (including VACUUM_FSM_EVERY_PAGES, which has been around for a couple\n> of releases now). The overall goal that I had in mind was to make the\n> one-pass case's use of the failsafe have analogous behavior to the\n> two-pass/has-indexes case -- a goal which was itself somewhat\n> arbitrary.\n>\n> > The failsafe mode affects the table scan itself by disabling cost\n> > limiting. As far as I can see the ways it triggers for the table scan (vs\n> > truncation or index processing) are:\n> >\n> > 1) Before vacuuming starts, for heap phases and indexes, if already\n> > necessary at that point\n> > 2) For a table with indexes, before/after each index vacuum, if now\n> > necessary\n> > 3) On a table without indexes, every 8GB, iff there are dead tuples, if now necessary\n> >\n> > Why would we want to trigger the failsafe mode during a scan of a table\n> > with dead tuples and no indexes, but not on a table without dead tuples\n> > or with indexes but fewer than m_w_m dead tuples? That makes little\n> > sense to me.\n>\n> What alternative does make sense to you?\n>\n> It seemed important to put the failsafe check at points where we do\n> other analogous work in all cases. We made a pragmatic trade-off. In\n> theory almost any scheme might not check often enough, and/or might\n> check too frequently.\n>\n> > It seems that for the no-index case the warning message is quite off?\n>\n> I'll fix that up some point soon. FWIW this happened because the\n> support for one-pass VACUUM was added quite late, at Robert's request.\n\n+1 to fix this. Are you already working on fixing this? If not, I'll\npost a patch.\n\n>\n> Another issue with the failsafe commit is that we haven't considered\n> the autovacuum_multixact_freeze_max_age table reloption -- we only\n> check the GUC. 
That might have accidentally been the right thing to\n> do, though, since the reloption is interpreted as lower than the GUC\n> in all cases anyway -- arguably the\n> autovacuum_multixact_freeze_max_age GUC should be all we care about\n> anyway. I will need to think about this question some more, though.\n\nFWIW, I intentionally ignored the reloption there since they're\ninterpreted as lower than the GUC as you mentioned and the situation\nwhere we need to enter the failsafe mode is not the table-specific\nproblem but a system-wide problem.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 18 May 2021 14:28:53 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "On Mon, May 17, 2021 at 10:29 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> +1 to fix this. Are you already working on fixing this? If not, I'll\n> post a patch.\n\nI posted a patch recently (last Thursday my time). Perhaps you can review it?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 17 May 2021 22:42:25 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "On Tue, May 18, 2021 at 2:42 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, May 17, 2021 at 10:29 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > +1 to fix this. Are you already working on fixing this? If not, I'll\n> > post a patch.\n>\n> I posted a patch recently (last Thursday my time). Perhaps you can review it?\n\nOh, I missed that the patch includes that fix. 
I'll review the patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 18 May 2021 14:46:07 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "On Tue, May 18, 2021 at 2:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, May 18, 2021 at 2:42 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> >\n> > On Mon, May 17, 2021 at 10:29 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > +1 to fix this. Are you already working on fixing this? If not, I'll\n> > > post a patch.\n> >\n> > I posted a patch recently (last Thursday my time). Perhaps you can review it?\n>\n> Oh, I missed that the patch includes that fix. I'll review the patch.\n>\n\nI've reviewed the patch. Here is one comment:\n\n if (vacrel->num_index_scans == 0 &&\n- vacrel->rel_pages <= FAILSAFE_MIN_PAGES)\n+ vacrel->rel_pages <= FAILSAFE_EVERY_PAGES)\n return false;\n\nSince there is the condition \"vacrel->num_index_scans == 0\" we could\nenter the failsafe mode even if the table is less than 4GB, if we\nenter lazy_check_wraparound_failsafe() after executing more than one\nindex scan. Whereas a vacuum on the table that is less than 4GB and\nhas no index never enters the failsafe mode. I think we can remove\nthis condition since I don't see the reason why we don't allow to\nenter the failsafe mode only when the first-time index scan in the\ncase of such tables. 
What do you think?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 18 May 2021 16:09:37 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "On Tue, May 18, 2021 at 12:10 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> Since there is the condition \"vacrel->num_index_scans == 0\" we could\n> enter the failsafe mode even if the table is less than 4GB, if we\n> enter lazy_check_wraparound_failsafe() after executing more than one\n> index scan. Whereas a vacuum on the table that is less than 4GB and\n> has no index never enters the failsafe mode. I think we can remove\n> this condition since I don't see the reason why we don't allow to\n> enter the failsafe mode only when the first-time index scan in the\n> case of such tables. What do you think?\n\nI'm convinced -- this does seem like premature optimization now.\n\nI pushed a version of the patch that removes that code just now.\n\nThanks\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 24 May 2021 17:14:36 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "On Thu, Jun 10, 2021 at 10:52 AM Andres Freund <andres@anarazel.de> wrote:\n\n>\n> I started to write a test for $Subject, which I think we sorely need.\n>\n> Currently my approach is to:\n> - start a cluster, create a few tables with test data\n> - acquire SHARE UPDATE EXCLUSIVE in a prepared transaction, to prevent\n> autovacuum from doing anything\n> - cause dead tuples to exist\n> - restart\n> - run pg_resetwal -x 2000027648\n> - do things like acquiring pins on pages that block vacuum from progressing\n> - commit prepared transaction\n> - wait for template0, template1 datfrozenxid to increase\n> - wait for relfrozenxid for most relations in postgres to increase\n> - 
release buffer pin\n> - wait for postgres datfrozenxid to increase\n>\n>\nCool. Thank you for working on that!\nCould you please share a WIP patch for the $subj? I'd be happy to help with\nit.\n\nSo far so good. But I've encountered a few things that stand in the way of\n> enabling such a test by default:\n>\n> 1) During startup StartupSUBTRANS() zeroes out all pages between\n> oldestActiveXID and nextXid. That takes 8s on my workstation, but only\n> because I have plenty memory - pg_subtrans ends up 14GB as I currently\n> do\n> the test. Clearly not something we could do on the BF.\n> ....\n>\n3) pg_resetwal -x requires to carefully choose an xid: It needs to be the\n> first xid on a clog page. It's not hard to determine which xids are but\n> it\n> depends on BLCKSZ and a few constants in clog.c. I've for now hardcoded\n> a\n> value appropriate for 8KB, but ...\n>\n> Maybe we can add new pg_resetwal option? Something like pg_resetwal\n--xid-near-wraparound, which will ask pg_resetwal to calculate exact xid\nvalue using values from pg_control and clog macros?\nI think it might come in handy for manual testing too.\n\n\n> I have 2 1/2 ideas about addressing 1);\n>\n> - We could exposing functionality to do advance nextXid to a future value\n> at\n> runtime, without filling in clog/subtrans pages. Would probably have to\n> live\n> in varsup.c and be exposed via regress.so or such?\n>\n> This option looks scary to me. Several functions rely on the fact that\nStartupSUBTRANS() have zeroed pages.\nAnd if we will do it conditional just for tests, it means that we won't\ntest the real code path.\n\n- The only reason StartupSUBTRANS() does that work is because of the\n> prepared\n> transaction holding back oldestActiveXID. 
That transaction in turn\n> exists to\n> prevent autovacuum from doing anything before we do test setup\n> steps.\n>\n\n\n>\n> Perhaps it'd be sufficient to set autovacuum_naptime really high\n> initially,\n> perform the test setup, set naptime to something lower, reload config.\n> But\n> I'm worried that might not be reliable: If something ends up allocating\n> an\n> xid we'd potentially reach the path in GetNewTransaction() that wakes up\n> the\n> launcher? But probably there wouldn't be anything doing so?\n>\n>\n Another aspect that might not make this a good choice is that it actually\n> seems relevant to be able to test cases where there are very old still\n> running transactions...\n>\n> Maybe this exact scenario can be covered with a separate long-running\ntest, not included in buildfarm test suite?\n\n-- \nBest regards,\nLubennikova Anastasia\n\nOn Thu, Jun 10, 2021 at 10:52 AM Andres Freund <andres@anarazel.de> wrote:\n\nI started to write a test for $Subject, which I think we sorely need.\n\nCurrently my approach is to:\n- start a cluster, create a few tables with test data\n- acquire SHARE UPDATE EXCLUSIVE in a prepared transaction, to prevent\n  autovacuum from doing anything\n- cause dead tuples to exist\n- restart\n- run pg_resetwal -x 2000027648\n- do things like acquiring pins on pages that block vacuum from progressing\n- commit prepared transaction\n- wait for template0, template1 datfrozenxid to increase\n- wait for relfrozenxid for most relations in postgres to increase\n- release buffer pin\n- wait for postgres datfrozenxid to increase\nCool. Thank you for working on that!Could you please share a WIP patch for the $subj? I'd be happy to help with it.\nSo far so good. But I've encountered a few things that stand in the way of\nenabling such a test by default:\n\n1) During startup StartupSUBTRANS() zeroes out all pages between\n   oldestActiveXID and nextXid. 
That takes 8s on my workstation, but only\n   because I have plenty memory - pg_subtrans ends up 14GB as I currently do\n   the test. Clearly not something we could do on the BF. .... \n3) pg_resetwal -x requires to carefully choose an xid: It needs to be the\n   first xid on a clog page. It's not hard to determine which xids are but it\n   depends on BLCKSZ and a few constants in clog.c. I've for now hardcoded a\n   value appropriate for 8KB, but ...\nMaybe we can add new pg_resetwal option?  Something like pg_resetwal --xid-near-wraparound, which will ask pg_resetwal to calculate exact xid value using values from pg_control and clog macros?I think it might come in handy for manual testing too.\n\nI have 2 1/2 ideas about addressing 1);\n\n- We could exposing functionality to do advance nextXid to a future value at\n  runtime, without filling in clog/subtrans pages. Would probably have to live\n  in varsup.c and be exposed via regress.so or such?\nThis option looks scary to me. Several functions rely on the fact that StartupSUBTRANS() have zeroed pages. And if we will do it conditional just for tests, it means that we won't test the real code path. \n- The only reason StartupSUBTRANS() does that work is because of the prepared\n  transaction holding back oldestActiveXID. That transaction in turn exists to\n  prevent autovacuum from doing anything before we do test setup\n  steps. \n\n  Perhaps it'd be sufficient to set autovacuum_naptime really high initially,\n  perform the test setup, set naptime to something lower, reload config. But\n  I'm worried that might not be reliable: If something ends up allocating an\n  xid we'd potentially reach the path in GetNewTransaction() that wakes up the\n  launcher?  
But probably there wouldn't be anything doing so?\n \n  Another aspect that might not make this a good choice is that it actually\n  seems relevant to be able to test cases where there are very old still\n  running transactions...\nMaybe this exact scenario can be covered with a separate long-running test, not included in buildfarm test suite? -- Best regards,Lubennikova Anastasia", "msg_date": "Thu, 10 Jun 2021 16:42:01 +0300", "msg_from": "Anastasia Lubennikova <lubennikovaav@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "Hi,\n\nOn 2021-06-10 16:42:01 +0300, Anastasia Lubennikova wrote:\n> Cool. Thank you for working on that!\n> Could you please share a WIP patch for the $subj? I'd be happy to help with\n> it.\n\nI've attached the current WIP state, which hasn't evolved much since\nthis message... I put the test in src/backend/access/heap/t/001_emergency_vacuum.pl\nbut I'm not sure that's the best place. But I didn't think\nsrc/test/recovery is great either.\n\nRegards,\n\nAndres", "msg_date": "Thu, 10 Jun 2021 18:18:50 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "On Fri, Jun 11, 2021 at 10:19 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-06-10 16:42:01 +0300, Anastasia Lubennikova wrote:\n> > Cool. Thank you for working on that!\n> > Could you please share a WIP patch for the $subj? I'd be happy to help with\n> > it.\n>\n> I've attached the current WIP state, which hasn't evolved much since\n> this message... I put the test in src/backend/access/heap/t/001_emergency_vacuum.pl\n> but I'm not sure that's the best place. 
But I didn't think\n> src/test/recovery is great either.\n>\n\nThank you for sharing the WIP patch.\n\nRegarding point (1) you mentioned (StartupSUBTRANS() takes a long time\nfor zeroing out all pages), how about using single-user mode instead\nof preparing the transaction? That is, after pg_resetwal we check the\nages of datfrozenxid by executing a query in single-user mode. That\nway, we don’t need to worry about autovacuum concurrently running\nwhile checking the ages of frozenxids. I’ve attached a PoC patch that\ndoes the scenario like:\n\n1. start cluster with autovacuum=off and create tables with a few data\nand make garbage on them\n2. stop cluster and do pg_resetwal\n3. start cluster in single-user mode\n4. check age(datfrozenxid)\n5. stop cluster\n6. start cluster and wait for autovacuums to increase template0,\ntemplate1, and postgres datfrozenxids\n\nI put new tests in src/test/module/heap since we already have tests\nfor brin in src/test/module/brin.\n\nI think that tap test facility to run queries in single-user mode will\nalso be helpful for testing a new vacuum option/command that is\nintended to use in emergency cases and proposed here[1].\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/flat/20220128012842.GZ23027%40telsasoft.com#b76c13554f90d1c8bb5532d6f3e5cbf8\n\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Tue, 1 Feb 2022 11:58:55 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "Hi,\n\nOn Tue, Feb 1, 2022 at 11:58 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Jun 11, 2021 at 10:19 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On 2021-06-10 16:42:01 +0300, Anastasia Lubennikova wrote:\n> > > Cool. Thank you for working on that!\n> > > Could you please share a WIP patch for the $subj? 
I'd be happy to help with\n> > > it.\n> >\n> > I've attached the current WIP state, which hasn't evolved much since\n> > this message... I put the test in src/backend/access/heap/t/001_emergency_vacuum.pl\n> > but I'm not sure that's the best place. But I didn't think\n> > src/test/recovery is great either.\n> >\n>\n> Thank you for sharing the WIP patch.\n>\n> Regarding point (1) you mentioned (StartupSUBTRANS() takes a long time\n> for zeroing out all pages), how about using single-user mode instead\n> of preparing the transaction? That is, after pg_resetwal we check the\n> ages of datfrozenxid by executing a query in single-user mode. That\n> way, we don’t need to worry about autovacuum concurrently running\n> while checking the ages of frozenxids. I’ve attached a PoC patch that\n> does the scenario like:\n>\n> 1. start cluster with autovacuum=off and create tables with a few data\n> and make garbage on them\n> 2. stop cluster and do pg_resetwal\n> 3. start cluster in single-user mode\n> 4. check age(datfrozenxid)\n> 5. stop cluster\n> 6. start cluster and wait for autovacuums to increase template0,\n> template1, and postgres datfrozenxids\n\nThe above steps are wrong.\n\nI think we can expose a function in an extension used only by this\ntest in order to set nextXid to a future value with zeroing out\nclog/subtrans pages. We don't need to fill all clog/subtrans pages\nbetween oldestActiveXID and nextXid. I've attached a PoC patch for\nadding this regression test and am going to register it to the next\nCF.\n\nBTW, while testing the emergency situation, I found there is a race\ncondition where anti-wraparound vacuum isn't invoked with the settings\nautovacuum = off, autovacuum_max_workers = 1. AN autovacuum worker\nsends a signal to the postmaster after advancing datfrozenxid in\nSetTransactionIdLimit(). 
But with the settings, if the autovacuum\nlauncher attempts to launch a worker before the autovacuum worker who\nhas signaled to the postmaster finishes, the launcher exits without\nlaunching a worker due to no free workers. The new launcher won’t be\nlaunched until new XID is generated (and only when new XID % 65536 ==\n0). Although autovacuum_max_workers = 1 is not mandatory for this\ntest, it's easier to verify the order of operations.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Thu, 30 Jun 2022 10:40:12 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "2022年6月30日(木) 10:40 Masahiko Sawada <sawada.mshk@gmail.com>:\n>\n> Hi,\n>\n> On Tue, Feb 1, 2022 at 11:58 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Fri, Jun 11, 2021 at 10:19 AM Andres Freund <andres@anarazel.de> wrote:\n> > >\n> > > Hi,\n> > >\n> > > On 2021-06-10 16:42:01 +0300, Anastasia Lubennikova wrote:\n> > > > Cool. Thank you for working on that!\n> > > > Could you please share a WIP patch for the $subj? I'd be happy to help with\n> > > > it.\n> > >\n> > > I've attached the current WIP state, which hasn't evolved much since\n> > > this message... I put the test in src/backend/access/heap/t/001_emergency_vacuum.pl\n> > > but I'm not sure that's the best place. But I didn't think\n> > > src/test/recovery is great either.\n> > >\n> >\n> > Thank you for sharing the WIP patch.\n> >\n> > Regarding point (1) you mentioned (StartupSUBTRANS() takes a long time\n> > for zeroing out all pages), how about using single-user mode instead\n> > of preparing the transaction? That is, after pg_resetwal we check the\n> > ages of datfrozenxid by executing a query in single-user mode. That\n> > way, we don’t need to worry about autovacuum concurrently running\n> > while checking the ages of frozenxids. 
I’ve attached a PoC patch that\n> > does the scenario like:\n> >\n> > 1. start cluster with autovacuum=off and create tables with a few data\n> > and make garbage on them\n> > 2. stop cluster and do pg_resetwal\n> > 3. start cluster in single-user mode\n> > 4. check age(datfrozenxid)\n> > 5. stop cluster\n> > 6. start cluster and wait for autovacuums to increase template0,\n> > template1, and postgres datfrozenxids\n>\n> The above steps are wrong.\n>\n> I think we can expose a function in an extension used only by this\n> test in order to set nextXid to a future value with zeroing out\n> clog/subtrans pages. We don't need to fill all clog/subtrans pages\n> between oldestActiveXID and nextXid. I've attached a PoC patch for\n> adding this regression test and am going to register it to the next\n> CF.\n>\n> BTW, while testing the emergency situation, I found there is a race\n> condition where anti-wraparound vacuum isn't invoked with the settings\n> autovacuum = off, autovacuum_max_workers = 1. AN autovacuum worker\n> sends a signal to the postmaster after advancing datfrozenxid in\n> SetTransactionIdLimit(). But with the settings, if the autovacuum\n> launcher attempts to launch a worker before the autovacuum worker who\n> has signaled to the postmaster finishes, the launcher exits without\n> launching a worker due to no free workers. The new launcher won’t be\n> launched until new XID is generated (and only when new XID % 65536 ==\n> 0). Although autovacuum_max_workers = 1 is not mandatory for this\n> test, it's easier to verify the order of operations.\n\nHi\n\nThanks for the patch. 
While reviewing the patch backlog, we have determined that\nthe latest version of this patch was submitted before meson support was\nimplemented, so it should have a \"meson.build\" file added for consideration for\ninclusion in PostgreSQL 16.\n\nRegards\n\nIan Barwick\n\n\n", "msg_date": "Wed, 16 Nov 2022 13:38:10 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "On 16/11/2022 06:38, Ian Lawrence Barwick wrote:\n> Thanks for the patch. While reviewing the patch backlog, we have determined that\n> the latest version of this patch was submitted before meson support was\n> implemented, so it should have a \"meson.build\" file added for consideration for\n> inclusion in PostgreSQL 16.\n\nI wanted to do some XID wraparound testing again, to test the 64-bit \nSLRUs patches [1], and revived this.\n\nI took a different approach to consuming the XIDs. Instead of setting \nnextXID directly, bypassing GetNewTransactionId(), this patch introduces \na helper function to call GetNewTransactionId() repeatedly. But because \nthat's slow, it does include a shortcut to skip over \"uninteresting\" \nXIDs. Whenever nextXid is close to an SLRU page boundary or XID \nwraparound, it calls GetNewTransactionId(), and otherwise it bumps up \nnextXid close to the next \"interesting\" value. That's still a lot slower \nthan just setting nextXid, but exercises the code more realistically.\n\nI've written some variant of this helper function many times over the \nyears, for ad hoc testing. I'd love to have it permanently in the git tree.\n\nIn addition to Masahiko's test for emergency vacuum, this includes two \nother tests. 002_limits.pl tests the \"warn limit\" and \"stop limit\" in \nGetNewTransactionId(), and 003_wraparound.pl burns through 10 billion \ntransactions in total, exercising XID wraparound in general. 
\nUnfortunately these tests are pretty slow; the tests run for about 4 \nminutes on my laptop in total, and use about 20 GB of disk space. So \nperhaps these need to be put in a special test suite that's not run as \npart of \"check-world\". Or perhaps leave out the 003_wraparounds.pl test, \nthat's the slowest of the tests. But I'd love to have these in the git \ntree in some form.\n\n[1] \nhttps://www.postgresql.org/message-id/CAJ7c6TPKf0W3MfpP2vr=kq7-NM5G12vTBhi7miu_5m8AG3Cw-w@mail.gmail.com)\n\n- Heikki\n\n\n\n", "msg_date": "Fri, 3 Mar 2023 13:34:50 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "On 03/03/2023 13:34, Heikki Linnakangas wrote:\n> On 16/11/2022 06:38, Ian Lawrence Barwick wrote:\n>> Thanks for the patch. While reviewing the patch backlog, we have determined that\n>> the latest version of this patch was submitted before meson support was\n>> implemented, so it should have a \"meson.build\" file added for consideration for\n>> inclusion in PostgreSQL 16.\n> \n> I wanted to do some XID wraparound testing again, to test the 64-bit\n> SLRUs patches [1], and revived this.\n\nForgot attachment.\n\n- Heikki", "msg_date": "Fri, 3 Mar 2023 15:41:55 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "On Fri, Mar 3, 2023 at 8:34 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 16/11/2022 06:38, Ian Lawrence Barwick wrote:\n> > Thanks for the patch. 
While reviewing the patch backlog, we have determined that\n> > the latest version of this patch was submitted before meson support was\n> > implemented, so it should have a \"meson.build\" file added for consideration for\n> > inclusion in PostgreSQL 16.\n>\n> I wanted to do some XID wraparound testing again, to test the 64-bit\n> SLRUs patches [1], and revived this.\n\nThank you for reviving this thread!\n\n>\n> I took a different approach to consuming the XIDs. Instead of setting\n> nextXID directly, bypassing GetNewTransactionId(), this patch introduces\n> a helper function to call GetNewTransactionId() repeatedly. But because\n> that's slow, it does include a shortcut to skip over \"uninteresting\"\n> XIDs. Whenever nextXid is close to an SLRU page boundary or XID\n> wraparound, it calls GetNewTransactionId(), and otherwise it bumps up\n> nextXid close to the next \"interesting\" value. That's still a lot slower\n> than just setting nextXid, but exercises the code more realistically.\n>\n> I've written some variant of this helper function many times over the\n> years, for ad hoc testing. I'd love to have it permanently in the git tree.\n\nThese functions seem to be better than mine.\n\n> In addition to Masahiko's test for emergency vacuum, this includes two\n> other tests. 002_limits.pl tests the \"warn limit\" and \"stop limit\" in\n> GetNewTransactionId(), and 003_wraparound.pl burns through 10 billion\n> transactions in total, exercising XID wraparound in general.\n> Unfortunately these tests are pretty slow; the tests run for about 4\n> minutes on my laptop in total, and use about 20 GB of disk space. So\n> perhaps these need to be put in a special test suite that's not run as\n> part of \"check-world\". Or perhaps leave out the 003_wraparounds.pl test,\n> that's the slowest of the tests. But I'd love to have these in the git\n> tree in some form.\n\ncbfot reports some failures. 
The main reason seems that meson.build in\nxid_wraparound directory adds the regression tests but the .sql and\n.out files are missing in the patch. Perhaps the patch wants to add\nonly tap tests as Makefile doesn't define REGRESS?\n\nEven after fixing this issue, CI tests (Cirrus CI) are not happy and\nreport failures due to a disk full. The size of xid_wraparound test\ndirectory is 105MB out of 262MB:\n\n% du -sh testrun\n262M testrun\n% du -sh testrun/xid_wraparound/\n105M testrun/xid_wraparound/\n% du -sh testrun/xid_wraparound/*\n460K testrun/xid_wraparound/001_emergency_vacuum\n93M testrun/xid_wraparound/002_limits\n12M testrun/xid_wraparound/003_wraparounds\n% ls -lh testrun/xid_wraparound/002_limits/log*\ntotal 93M\n-rw-------. 1 masahiko masahiko 93M Mar 7 17:34 002_limits_wraparound.log\n-rw-rw-r--. 1 masahiko masahiko 20K Mar 7 17:34 regress_log_002_limits\n\nThe biggest file is the server logs since an autovacuum worker writes\nautovacuum logs for every table for every second (autovacuum_naptime\nis 1s). Maybe we can set log_autovacuum_min_duration reloption for the\ntest tables instead of globally enabling it\n\nThe 001 test uses the 2PC transaction that holds locks on tables but\nsince we can consume xids while the server running, we don't need\nthat. Instead I think we can keep a transaction open in the background\nlike 002 test does.\n\nI'll try these ideas.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 8 Mar 2023 13:52:31 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "On Fri, Mar 3, 2023 at 3:34 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> I took a different approach to consuming the XIDs. Instead of setting\n> nextXID directly, bypassing GetNewTransactionId(), this patch introduces\n> a helper function to call GetNewTransactionId() repeatedly. 
But because\n> that's slow, it does include a shortcut to skip over \"uninteresting\"\n> XIDs. Whenever nextXid is close to an SLRU page boundary or XID\n> wraparound, it calls GetNewTransactionId(), and otherwise it bumps up\n> nextXid close to the next \"interesting\" value. That's still a lot slower\n> than just setting nextXid, but exercises the code more realistically.\n\nSurely your tap test should be using single user mode? Perhaps you\nmissed the obnoxious HINT, that's part of the WARNING that the test\nparses? ;-)\n\nThis is a very useful patch. I certainly don't want to make life\nharder by (say) connecting it to the single user mode problem.\nBut...the single user mode thing really needs to go away. It's just\nterrible advice, and actively harms users.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 7 Mar 2023 21:21:00 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "On Tue, Mar 07, 2023 at 09:21:00PM -0800, Peter Geoghegan wrote:\n> Surely your tap test should be using single user mode? Perhaps you\n> missed the obnoxious HINT, that's part of the WARNING that the test\n> parses? ;-)\n\nI may be missing something, but you cannot use directly a \"postgres\"\ncommand in a TAP test, can you? See 1a9d802, that has fixed a problem\nwhen TAP tests run with a privileged account on Windows.\n--\nMichael", "msg_date": "Thu, 9 Mar 2023 15:46:53 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "On Wed, Mar 8, 2023 at 10:47 PM Michael Paquier <michael@paquier.xyz> wrote:\n> I may be missing something, but you cannot use directly a \"postgres\"\n> command in a TAP test, can you? See 1a9d802, that has fixed a problem\n> when TAP tests run with a privileged account on Windows.\n\nI was joking. 
But I did have a real point: once we have tests for the\nxidStopLimit mechanism, why not take the opportunity to correct the\nlong standing issue with the documentation advising the use of single\nuser mode?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 11 Mar 2023 20:46:24 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "On Sat, Mar 11, 2023 at 8:47 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I was joking. But I did have a real point: once we have tests for the\n> xidStopLimit mechanism, why not take the opportunity to correct the\n> long standing issue with the documentation advising the use of single\n> user mode?\n\nDoes https://commitfest.postgresql.org/42/4128/ address that\nindependently enough?\n\n--Jacob\n\n\n", "msg_date": "Mon, 13 Mar 2023 15:25:46 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "On Mon, Mar 13, 2023 at 3:25 PM Jacob Champion <jchampion@timescale.com> wrote:\n> Does https://commitfest.postgresql.org/42/4128/ address that\n> independently enough?\n\nI wasn't aware of that patch. It looks like it does exactly what I was\narguing in favor of. So yes.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 13 Mar 2023 16:24:48 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "On Wed, Mar 8, 2023 at 1:52 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Mar 3, 2023 at 8:34 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> >\n> > On 16/11/2022 06:38, Ian Lawrence Barwick wrote:\n> > > Thanks for the patch. 
While reviewing the patch backlog, we have determined that\n> > > the latest version of this patch was submitted before meson support was\n> > > implemented, so it should have a \"meson.build\" file added for consideration for\n> > > inclusion in PostgreSQL 16.\n> >\n> > I wanted to do some XID wraparound testing again, to test the 64-bit\n> > SLRUs patches [1], and revived this.\n>\n> Thank you for reviving this thread!\n>\n> >\n> > I took a different approach to consuming the XIDs. Instead of setting\n> > nextXID directly, bypassing GetNewTransactionId(), this patch introduces\n> > a helper function to call GetNewTransactionId() repeatedly. But because\n> > that's slow, it does include a shortcut to skip over \"uninteresting\"\n> > XIDs. Whenever nextXid is close to an SLRU page boundary or XID\n> > wraparound, it calls GetNewTransactionId(), and otherwise it bumps up\n> > nextXid close to the next \"interesting\" value. That's still a lot slower\n> > than just setting nextXid, but exercises the code more realistically.\n> >\n> > I've written some variant of this helper function many times over the\n> > years, for ad hoc testing. I'd love to have it permanently in the git tree.\n>\n> These functions seem to be better than mine.\n>\n> > In addition to Masahiko's test for emergency vacuum, this includes two\n> > other tests. 002_limits.pl tests the \"warn limit\" and \"stop limit\" in\n> > GetNewTransactionId(), and 003_wraparound.pl burns through 10 billion\n> > transactions in total, exercising XID wraparound in general.\n> > Unfortunately these tests are pretty slow; the tests run for about 4\n> > minutes on my laptop in total, and use about 20 GB of disk space. So\n> > perhaps these need to be put in a special test suite that's not run as\n> > part of \"check-world\". Or perhaps leave out the 003_wraparounds.pl test,\n> > that's the slowest of the tests. But I'd love to have these in the git\n> > tree in some form.\n>\n> cbfot reports some failures. 
The main reason seems that meson.build in\n> xid_wraparound directory adds the regression tests but the .sql and\n> .out files are missing in the patch. Perhaps the patch wants to add\n> only tap tests as Makefile doesn't define REGRESS?\n>\n> Even after fixing this issue, CI tests (Cirrus CI) are not happy and\n> report failures due to a disk full. The size of xid_wraparound test\n> directory is 105MB out of 262MB:\n>\n> % du -sh testrun\n> 262M testrun\n> % du -sh testrun/xid_wraparound/\n> 105M testrun/xid_wraparound/\n> % du -sh testrun/xid_wraparound/*\n> 460K testrun/xid_wraparound/001_emergency_vacuum\n> 93M testrun/xid_wraparound/002_limits\n> 12M testrun/xid_wraparound/003_wraparounds\n> % ls -lh testrun/xid_wraparound/002_limits/log*\n> total 93M\n> -rw-------. 1 masahiko masahiko 93M Mar 7 17:34 002_limits_wraparound.log\n> -rw-rw-r--. 1 masahiko masahiko 20K Mar 7 17:34 regress_log_002_limits\n>\n> The biggest file is the server logs since an autovacuum worker writes\n> autovacuum logs for every table for every second (autovacuum_naptime\n> is 1s). Maybe we can set log_autovacuum_min_duration reloption for the\n> test tables instead of globally enabling it\n\nI think it could be acceptable since 002 and 003 tests are executed\nonly when required. And 001 test seems to be able to pass on cfbot but\nit takes more than 30 sec. In the attached patch, I made these tests\noptional and these are enabled if envar ENABLE_XID_WRAPAROUND_TESTS is\ndefined (supporting only autoconf).\n\n>\n> The 001 test uses the 2PC transaction that holds locks on tables but\n> since we can consume xids while the server running, we don't need\n> that. Instead I think we can keep a transaction open in the background\n> like 002 test does.\n\nUpdated in the new patch. 
Also, I added a check if the failsafe mode\nis triggered.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 14 Mar 2023 15:01:30 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "I agree having the new functions in the tree is useful. I also tried\nrunning the TAP tests in v2, but 001 and 002 fail to run:\n\nOdd number of elements in hash assignment at [...]/Cluster.pm line 2010.\nCan't locate object method \"pump_nb\" via package\n\"PostgreSQL::Test::BackgroundPsql\" at [...]\n\nIt seems to be complaining about\n\n+my $in = '';\n+my $out = '';\n+my $timeout = IPC::Run::timer($PostgreSQL::Test::Utils::timeout_default);\n+my $background_psql = $node->background_psql('postgres', \\$in, \\$out,\n$timeout);\n\n...that call to background_psql doesn't look like other ones that have \"key\n=> value\". Is there something I'm missing?\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n
", "msg_date": "Fri, 21 Apr 2023 10:02:31 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "On Fri, Apr 21, 2023 at 12:02 PM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n>\n> I agree having the new functions in the tree is useful. I also tried running the TAP tests in v2, but 001 and 002 fail to run:\n>\n> Odd number of elements in hash assignment at [...]/Cluster.pm line 2010.\n> Can't locate object method \"pump_nb\" via package \"PostgreSQL::Test::BackgroundPsql\" at [...]\n>\n> It seems to be complaining about\n>\n> +my $in = '';\n> +my $out = '';\n> +my $timeout = IPC::Run::timer($PostgreSQL::Test::Utils::timeout_default);\n> +my $background_psql = $node->background_psql('postgres', \\$in, \\$out, $timeout);\n>\n> ...that call to background_psql doesn't look like other ones that have \"key => value\". Is there something I'm missing?\n\nThanks for reporting. I think that the patch needs to be updated since\ncommit 664d757531e1 changed background psql TAP functions. I've\nattached the updated patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 27 Apr 2023 23:06:40 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "> On 27 Apr 2023, at 16:06, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> On Fri, Apr 21, 2023 at 12:02 PM John Naylor\n> <john.naylor@enterprisedb.com> wrote:\n\n>> ...that call to background_psql doesn't look like other ones that have \"key => value\". Is there something I'm missing?\n> \n> Thanks for reporting. I think that the patch needs to be updated since\n> commit 664d757531e1 changed background psql TAP functions. 
I've\n> attached the updated patch.\n\nIs there a risk that the background psql will time out on slow systems during\nthe consumption of 2B xid's? Since you mainly want to hold it open for the\nduration of testing you might want to bump it to avoid false negatives on slow\ntest systems.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 27 Apr 2023 16:12:15 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "On Thu, Apr 27, 2023 at 9:12 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 27 Apr 2023, at 16:06, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > On Fri, Apr 21, 2023 at 12:02 PM John Naylor\n> > <john.naylor@enterprisedb.com> wrote:\n>\n> >> ...that call to background_psql doesn't look like other ones that have\n\"key => value\". Is there something I'm missing?\n> >\n> > Thanks for reporting. I think that the patch needs to be updated since\n> > commit 664d757531e1 changed background psql TAP functions. I've\n> > attached the updated patch.\n\nThanks, it passes for me now.\n\n> Is there a risk that the background psql will time out on slow systems\nduring\n> the consumption of 2B xid's? Since you mainly want to hold it open for\nthe\n> duration of testing you might want to bump it to avoid false negatives on\nslow\n> test systems.\n\nIf they're that slow, I'd worry more about generating 20GB of xact status\ndata. That's why the tests are disabled by default.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Thu, Apr 27, 2023 at 9:12 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > On 27 Apr 2023, at 16:06, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > On Fri, Apr 21, 2023 at 12:02 PM John Naylor\n> > <john.naylor@enterprisedb.com> wrote:\n>\n> >> ...that call to background_psql doesn't look like other ones that have\n> \"key => value\". Is there something I'm missing?\n> >\n> > Thanks for reporting. 
I think that the patch needs to be updated since\n> > commit 664d757531e1 changed background psql TAP functions. I've\n> > attached the updated patch.\n\nThanks, it passes for me now.\n\n> Is there a risk that the background psql will time out on slow systems\nduring\n> the consumption of 2B xid's? Since you mainly want to hold it open for\nthe\n> duration of testing you might want to bump it to avoid false negatives on\nslow\n> test systems.\n\nIf they're that slow, I'd worry more about generating 20GB of xact status\ndata. That's why the tests are disabled by default.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n
", "msg_date": "Fri, 28 Apr 2023 10:21:24 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> On Thu, Apr 27, 2023 at 9:12 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>> Is there a risk that the background psql will time out on slow systems during\n>> the consumption of 2B xid's? Since you mainly want to hold it open for the\n>> duration of testing you might want to bump it to avoid false negatives on\n>> slow test systems.\n\n> If they're that slow, I'd worry more about generating 20GB of xact status\n> data. 
That's why the tests are disabled by default.\n> \n> There is exactly zero chance that anyone will accept the introduction\n> of such an expensive test into either check-world or the buildfarm\n> sequence.\n\nEven though the entire suite is disabled by default, shouldn't it also require\nPG_TEST_EXTRA to be consistent with other off-by-default suites like for example\nsrc/test/kerberos?\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 28 Apr 2023 10:49:45 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "On Thu, Apr 27, 2023 at 11:12 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 27 Apr 2023, at 16:06, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > On Fri, Apr 21, 2023 at 12:02 PM John Naylor\n> > <john.naylor@enterprisedb.com> wrote:\n>\n> >> ...that call to background_psql doesn't look like other ones that have \"key => value\". Is there something I'm missing?\n> >\n> > Thanks for reporting. I think that the patch needs to be updated since\n> > commit 664d757531e1 changed background psql TAP functions. I've\n> > attached the updated patch.\n>\n> Is there a risk that the background psql will time out on slow systems during\n> the consumption of 2B xid's? Since you mainly want to hold it open for the\n> duration of testing you might want to bump it to avoid false negatives on slow\n> test systems.\n\nAgreed. The timeout can be set by manually setting\nPG_TEST_TIMEOUT_DEFAULT, but I bump it to 10 min by default. 
And it\nnow requires setting PG_TEST_EXTRA to run it.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 12 Jul 2023 16:52:23 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "> On 12 Jul 2023, at 09:52, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n\n> Agreed. The timeout can be set by manually setting\n> PG_TEST_TIMEOUT_DEFAULT, but I bump it to 10 min by default. And it\n> now require setting PG_TET_EXTRA to run it.\n\n+# bump the query timeout to avoid false negatives on slow test syetems.\ntypo: s/syetems/systems/\n\n\n+# bump the query timeout to avoid false negatives on slow test syetems.\n+$ENV{PG_TEST_TIMEOUT_DEFAULT} = 600;\nDoes this actually work? Utils.pm read the environment variable at compile\ntime in the BEGIN block so this setting won't be seen? A quick testprogram\nseems to confirm this but I might be missing something.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 12 Jul 2023 13:47:51 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "On Wed, Jul 12, 2023 at 01:47:51PM +0200, Daniel Gustafsson wrote:\n> +# bump the query timeout to avoid false negatives on slow test syetems.\n> +$ENV{PG_TEST_TIMEOUT_DEFAULT} = 600;\n> Does this actually work? Utils.pm read the environment variable at compile\n> time in the BEGIN block so this setting won't be seen? A quick testprogram\n> seems to confirm this but I might be missing something.\n\nI wish that this test were cheaper, without a need to depend on\nPG_TEST_EXTRA.. 
Actually, note that you are forgetting to update the\ndocumentation of PG_TEST_EXTRA with this new value of xid_wraparound.\n--\nMichael", "msg_date": "Tue, 22 Aug 2023 14:49:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "> On 22 Aug 2023, at 07:49, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Wed, Jul 12, 2023 at 01:47:51PM +0200, Daniel Gustafsson wrote:\n>> +# bump the query timeout to avoid false negatives on slow test syetems.\n>> +$ENV{PG_TEST_TIMEOUT_DEFAULT} = 600;\n>> Does this actually work? Utils.pm read the environment variable at compile\n>> time in the BEGIN block so this setting won't be seen? A quick testprogram\n>> seems to confirm this but I might be missing something.\n> \n> I wish that this test were cheaper, without a need to depend on\n> PG_TEST_EXTRA.. Actually, note that you are forgetting to update the\n> documentation of PG_TEST_EXTRA with this new value of xid_wraparound.\n\nAgreed, it would be nice, but I don't see any way to achieve that. I still\nthink the test is worthwhile to add, once the upthread mentioned issues are\nresolved.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 25 Aug 2023 11:26:26 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "On Wed, Jul 12, 2023 at 01:47:51PM +0200, Daniel Gustafsson wrote:\n> > On 12 Jul 2023, at 09:52, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > Agreed. The timeout can be set by manually setting\n> > PG_TEST_TIMEOUT_DEFAULT, but I bump it to 10 min by default. 
And it\n> > now require setting PG_TET_EXTRA to run it.\n> \n> +# bump the query timeout to avoid false negatives on slow test syetems.\n> typo: s/syetems/systems/\n> \n> \n> +# bump the query timeout to avoid false negatives on slow test syetems.\n> +$ENV{PG_TEST_TIMEOUT_DEFAULT} = 600;\n> Does this actually work? Utils.pm read the environment variable at compile\n> time in the BEGIN block so this setting won't be seen? A quick testprogram\n> seems to confirm this but I might be missing something.\n\nThe correct way to get a longer timeout is \"IPC::Run::timer(4 *\n$PostgreSQL::Test::Utils::timeout_default);\". Even if changing env worked,\nthat would be removing the ability for even-slower systems to set timeouts\ngreater than 10min.\n\n\n", "msg_date": "Sat, 2 Sep 2023 22:48:01 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "Sorry for the late reply.\n\nOn Sun, Sep 3, 2023 at 2:48 PM Noah Misch <noah@leadboat.com> wrote:\n>\n> On Wed, Jul 12, 2023 at 01:47:51PM +0200, Daniel Gustafsson wrote:\n> > > On 12 Jul 2023, at 09:52, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > Agreed. The timeout can be set by manually setting\n> > > PG_TEST_TIMEOUT_DEFAULT, but I bump it to 10 min by default. And it\n> > > now require setting PG_TET_EXTRA to run it.\n> >\n> > +# bump the query timeout to avoid false negatives on slow test syetems.\n> > typo: s/syetems/systems/\n> >\n> >\n> > +# bump the query timeout to avoid false negatives on slow test syetems.\n> > +$ENV{PG_TEST_TIMEOUT_DEFAULT} = 600;\n> > Does this actually work? Utils.pm read the environment variable at compile\n> > time in the BEGIN block so this setting won't be seen? A quick testprogram\n> > seems to confirm this but I might be missing something.\n>\n> The correct way to get a longer timeout is \"IPC::Run::timer(4 *\n> $PostgreSQL::Test::Utils::timeout_default);\". 
Even if changing env worked,\n> that would be removing the ability for even-slower systems to set timeouts\n> greater than 10min.\n\nAgreed.\n\nI've attached new version patches. 0001 patch adds an option to\nbackground_psql to specify the timeout seconds, and 0002 patch is the\nmain regression test patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 27 Sep 2023 21:39:45 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "> On 27 Sep 2023, at 14:39, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n\n> I've attached new version patches. 0001 patch adds an option to\n> background_psql to specify the timeout seconds, and 0002 patch is the\n> main regression test patch.\n\n-=item PostgreSQL::Test::BackgroundPsql->new(interactive, @params)\n+=item PostgreSQL::Test::BackgroundPsql->new(interactive, @params, timeout)\n\nLooking at this I notice that I made a typo in 664d757531e, the =item line\nshould have \"@psql_params\" and not \"@params\". Perhaps you can fix that minor\nthing while in there?\n\n\n+\t$timeout = $params{timeout} if defined $params{timeout};\n\nI think this should be documented in the background_psql POD docs.\n\n\n+ Not enabled by default it is resource intensive.\n\nThis sentence is missing a \"because\", should read: \"..by default *because* it\nis..\"\n\n\n+# Bump the query timeout to avoid false negatives on slow test systems.\n+my $psql_timeout_secs = 4 * $PostgreSQL::Test::Utils::timeout_default;\n\nShould we bump the timeout like this for all systems? 
I interpreted Noah's\ncomment such that it should be possible for slower systems to override, not\nthat it should be extended everywhere, but I might have missed something.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 29 Sep 2023 12:17:04 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "On Thu, 28 Sept 2023 at 03:55, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Sorry for the late reply.\n>\n> On Sun, Sep 3, 2023 at 2:48 PM Noah Misch <noah@leadboat.com> wrote:\n> >\n> > On Wed, Jul 12, 2023 at 01:47:51PM +0200, Daniel Gustafsson wrote:\n> > > > On 12 Jul 2023, at 09:52, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > Agreed. The timeout can be set by manually setting\n> > > > PG_TEST_TIMEOUT_DEFAULT, but I bump it to 10 min by default. And it\n> > > > now require setting PG_TET_EXTRA to run it.\n> > >\n> > > +# bump the query timeout to avoid false negatives on slow test syetems.\n> > > typo: s/syetems/systems/\n> > >\n> > >\n> > > +# bump the query timeout to avoid false negatives on slow test syetems.\n> > > +$ENV{PG_TEST_TIMEOUT_DEFAULT} = 600;\n> > > Does this actually work? Utils.pm read the environment variable at compile\n> > > time in the BEGIN block so this setting won't be seen? A quick testprogram\n> > > seems to confirm this but I might be missing something.\n> >\n> > The correct way to get a longer timeout is \"IPC::Run::timer(4 *\n> > $PostgreSQL::Test::Utils::timeout_default);\". Even if changing env worked,\n> > that would be removing the ability for even-slower systems to set timeouts\n> > greater than 10min.\n>\n> Agreed.\n>\n> I've attached new version patches. 
0001 patch adds an option to\n> background_psql to specify the timeout seconds, and 0002 patch is the\n> main regression test patch.\n\nFew comments:\n1) Should we have some validation for the inputs given:\n+PG_FUNCTION_INFO_V1(consume_xids_until);\n+Datum\n+consume_xids_until(PG_FUNCTION_ARGS)\n+{\n+ FullTransactionId targetxid =\nFullTransactionIdFromU64((uint64) PG_GETARG_INT64(0));\n+ FullTransactionId lastxid;\n+\n+ if (!FullTransactionIdIsNormal(targetxid))\n+ elog(ERROR, \"targetxid %llu is not normal\", (unsigned\nlong long) U64FromFullTransactionId(targetxid));\n\nIf not it will take inputs like -1 and 999999999999999.\nAlso the notice messages might confuse for the above values, as it\nwill show a different untilxid value like the below:\npostgres=# SELECT consume_xids_until(999999999999999);\nNOTICE: consumed up to 0:10000809 / 232830:2764472319\n\n2) Should this be added after worker_spi as we generally add it in the\nalphabetical order:\ndiff --git a/src/test/modules/meson.build b/src/test/modules/meson.build\nindex fcd643f6f1..4054bde84c 100644\n--- a/src/test/modules/meson.build\n+++ b/src/test/modules/meson.build\n@@ -10,6 +10,7 @@ subdir('libpq_pipeline')\n subdir('plsample')\n subdir('spgist_name_ops')\n subdir('ssl_passphrase_callback')\n+subdir('xid_wraparound')\n subdir('test_bloomfilter')\n\n3) Similarly here too:\nindex e81873cb5a..a4c845ab4a 100644\n--- a/src/test/modules/Makefile\n+++ b/src/test/modules/Makefile\n@@ -13,6 +13,7 @@ SUBDIRS = \\\n libpq_pipeline \\\n plsample \\\n spgist_name_ops \\\n+ xid_wraparound \\\n test_bloomfilter \\\n\n4) The following includes are not required transam.h, fmgr.h, lwlock.h\n+ * src/test/modules/xid_wraparound/xid_wraparound.c\n+ *\n+ * -------------------------------------------------------------------------\n+ */\n+#include \"postgres.h\"\n+\n+#include \"access/transam.h\"\n+#include \"access/xact.h\"\n+#include \"fmgr.h\"\n+#include \"miscadmin.h\"\n+#include \"storage/lwlock.h\"\n+#include 
\"storage/proc.h\"\n\nRegards,\nVignesh\n\n\n", "msg_date": "Fri, 29 Sep 2023 15:49:54 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "On Fri, Sep 29, 2023 at 12:17:04PM +0200, Daniel Gustafsson wrote:\n> +# Bump the query timeout to avoid false negatives on slow test systems.\n> +my $psql_timeout_secs = 4 * $PostgreSQL::Test::Utils::timeout_default;\n> \n> Should we bump the timeout like this for all systems? I interpreted Noah's\n> comment such that it should be possible for slower systems to override, not\n> that it should be extended everywhere, but I might have missed something.\n\nThis is the conventional way to do it. For an operation far slower than a\ntypical timeout_default situation, the patch can and should dilate the default\ntimeout like this. The patch version as of my last comment was extending the\ntimeout but also blocking users from extending it further via\nPG_TEST_TIMEOUT_DEFAULT. The latest version restores PG_TEST_TIMEOUT_DEFAULT\nreactivity, resolving my comment.\n\n\n", "msg_date": "Fri, 29 Sep 2023 06:57:21 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "On Fri, Sep 29, 2023 at 7:17 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 27 Sep 2023, at 14:39, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> > I've attached new version patches. 0001 patch adds an option to\n> > background_psql to specify the timeout seconds, and 0002 patch is the\n> > main regression test patch.\n>\n> -=item PostgreSQL::Test::BackgroundPsql->new(interactive, @params)\n> +=item PostgreSQL::Test::BackgroundPsql->new(interactive, @params, timeout)\n>\n> Looking at this I notice that I made a typo in 664d757531e, the =item line\n> should have \"@psql_params\" and not \"@params\". 
Perhaps you can fix that minor\n> thing while in there?\n>\n>\n> + $timeout = $params{timeout} if defined $params{timeout};\n>\n> I think this should be documented in the background_psql POD docs.\n\nWhile updating the documentation, I found the following description:\n\n=item $node->background_psql($dbname, %params) =>\nPostgreSQL::Test::BackgroundPsql inst$\nInvoke B<psql> on B<$dbname> and return a BackgroundPsql object.\n\nA default timeout of $PostgreSQL::Test::Utils::timeout_default is set up,\nwhich can be modified later.\n\nIs it true that we can modify the timeout after creating\nBackgroundPsql object? If so, it seems we don't need to introduce the\nnew timeout argument. But how?\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 27 Nov 2023 22:06:15 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "> On 27 Nov 2023, at 14:06, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n\n> Is it true that we can modify the timeout after creating\n> BackgroundPsql object? If so, it seems we don't need to introduce the\n> new timeout argument. But how?\n\nI can't remember if that's leftovers that incorrectly remains from an earlier\nversion of the BackgroundPsql work, or if it's a very bad explanation of\n->set_query_timer_restart(). 
The timeout will use the timeout_default value\nand that cannot be overridden, it can only be reset per query.\n\nWith your patch the timeout still cannot be changed, but at least set during\nstart which seems good enough until there are tests warranting more complexity.\nThe docs should be corrected to reflect this in your patch.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 27 Nov 2023 14:40:09 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "On Mon, Nov 27, 2023 at 10:40 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 27 Nov 2023, at 14:06, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> > Is it true that we can modify the timeout after creating\n> > BackgroundPsql object? If so, it seems we don't need to introduce the\n> > new timeout argument. But how?\n>\n> I can't remember if that's leftovers that incorrectly remains from an earlier\n> version of the BackgroundPsql work, or if it's a very bad explanation of\n> ->set_query_timer_restart(). The timeout will use the timeout_default value\n> and that cannot be overridden, it can only be reset per query.\n\nThank you for confirming this. I see there is the same problem also in\ninteractive_psql(). So I've attached the 0001 patch to fix these\ndocumentation issues. 
Which could be backpatched.\n\n> With your patch the timeout still cannot be changed, but at least set during\n> start which seems good enough until there are tests warranting more complexity.\n> The docs should be corrected to reflect this in your patch.\n\nI've incorporated the comments except for the following one and\nattached updated version of the rest patches:\n\nOn Fri, Sep 29, 2023 at 7:20 PM vignesh C <vignesh21@gmail.com> wrote:\n> Few comments:\n> 1) Should we have some validation for the inputs given:\n> +PG_FUNCTION_INFO_V1(consume_xids_until);\n> +Datum\n> +consume_xids_until(PG_FUNCTION_ARGS)\n> +{\n> + FullTransactionId targetxid =\n> FullTransactionIdFromU64((uint64) PG_GETARG_INT64(0));\n> + FullTransactionId lastxid;\n> +\n> + if (!FullTransactionIdIsNormal(targetxid))\n> + elog(ERROR, \"targetxid %llu is not normal\", (unsigned\n> long long) U64FromFullTransactionId(targetxid));\n>\n> If not it will take inputs like -1 and 999999999999999.\n> Also the notice messages might confuse for the above values, as it\n> will show a different untilxid value like the below:\n> postgres=# SELECT consume_xids_until(999999999999999);\n> NOTICE: consumed up to 0:10000809 / 232830:2764472319\n\nThe full transaction ids shown in the notice messages are separated\ninto epoch and xid so it's not a different value. 
This epoch-and-xid\nstyle is used also in pg_controldata output and makes sense to me to\nshow the progress of xid consumption.\n\nOnce the new test gets committed, I'll prepare a new buildfarm animal\nfor that if possible.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 28 Nov 2023 11:00:53 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "> On 28 Nov 2023, at 03:00, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> \n> On Mon, Nov 27, 2023 at 10:40 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>> \n>>> On 27 Nov 2023, at 14:06, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>> \n>>> Is it true that we can modify the timeout after creating\n>>> BackgroundPsql object? If so, it seems we don't need to introduce the\n>>> new timeout argument. But how?\n>> \n>> I can't remember if that's leftovers that incorrectly remains from an earlier\n>> version of the BackgroundPsql work, or if it's a very bad explanation of\n>> ->set_query_timer_restart(). The timeout will use the timeout_default value\n>> and that cannot be overridden, it can only be reset per query.\n> \n> Thank you for confirming this. I see there is the same problem also in\n> interactive_psql(). So I've attached the 0001 patch to fix these\n> documentation issues.\n\n-A default timeout of $PostgreSQL::Test::Utils::timeout_default is set up,\n-which can be modified later.\n+A default timeout of $PostgreSQL::Test::Utils::timeout_default is set up.\n\nSince it cannot be modified, I think we should just say \"A timeout of ..\" and\ncall it a default timeout. 
This obviously only matters for the backpatch since\nthe sentence is removed in 0002.\n\n> Which could be backpatched.\n\n+1\n\n>> With your patch the timeout still cannot be changed, but at least set during\n>> start which seems good enough until there are tests warranting more complexity.\n>> The docs should be corrected to reflect this in your patch.\n> \n> I've incorporated the comments except for the following one and\n> attached updated version of the rest patches:\n\nLGTM.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 28 Nov 2023 11:16:25 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "On Tue, Nov 28, 2023 at 7:16 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 28 Nov 2023, at 03:00, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Nov 27, 2023 at 10:40 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> >>\n> >>> On 27 Nov 2023, at 14:06, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >>\n> >>> Is it true that we can modify the timeout after creating\n> >>> BackgroundPsql object? If so, it seems we don't need to introduce the\n> >>> new timeout argument. But how?\n> >>\n> >> I can't remember if that's leftovers that incorrectly remains from an earlier\n> >> version of the BackgroundPsql work, or if it's a very bad explanation of\n> >> ->set_query_timer_restart(). The timeout will use the timeout_default value\n> >> and that cannot be overridden, it can only be reset per query.\n> >\n> > Thank you for confirming this. I see there is the same problem also in\n> > interactive_psql(). 
So I've attached the 0001 patch to fix these\n> > documentation issues.\n>\n> -A default timeout of $PostgreSQL::Test::Utils::timeout_default is set up,\n> -which can be modified later.\n> +A default timeout of $PostgreSQL::Test::Utils::timeout_default is set up.\n>\n> Since it cannot be modified, I think we should just say \"A timeout of ..\" and\n> call it a default timeout. This obviously only matters for the backpatch since\n> the sentence is removed in 0002.\n\nAgreed.\n\nI've attached new version patches (0002 and 0003 are unchanged except\nfor the commit message). I'll push them, barring any objections.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 29 Nov 2023 05:27:39 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "On Wed, Nov 29, 2023 at 5:27 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Nov 28, 2023 at 7:16 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> >\n> > > On 28 Nov 2023, at 03:00, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Mon, Nov 27, 2023 at 10:40 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > >>\n> > >>> On 27 Nov 2023, at 14:06, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >>\n> > >>> Is it true that we can modify the timeout after creating\n> > >>> BackgroundPsql object? If so, it seems we don't need to introduce the\n> > >>> new timeout argument. But how?\n> > >>\n> > >> I can't remember if that's leftovers that incorrectly remains from an earlier\n> > >> version of the BackgroundPsql work, or if it's a very bad explanation of\n> > >> ->set_query_timer_restart(). The timeout will use the timeout_default value\n> > >> and that cannot be overridden, it can only be reset per query.\n> > >\n> > > Thank you for confirming this. I see there is the same problem also in\n> > > interactive_psql(). 
So I've attached the 0001 patch to fix these\n> > > documentation issues.\n> >\n> > -A default timeout of $PostgreSQL::Test::Utils::timeout_default is set up,\n> > -which can be modified later.\n> > +A default timeout of $PostgreSQL::Test::Utils::timeout_default is set up.\n> >\n> > Since it cannot be modified, I think we should just say \"A timeout of ..\" and\n> > call it a default timeout. This obviously only matters for the backpatch since\n> > the sentence is removed in 0002.\n>\n> Agreed.\n>\n> I've attached new version patches (0002 and 0003 are unchanged except\n> for the commit message). I'll push them, barring any objections.\n>\n\nPushed.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 30 Nov 2023 16:35:26 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "On Thu, Nov 30, 2023 at 4:35 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Nov 29, 2023 at 5:27 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Nov 28, 2023 at 7:16 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > >\n> > > > On 28 Nov 2023, at 03:00, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Mon, Nov 27, 2023 at 10:40 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > > >>\n> > > >>> On 27 Nov 2023, at 14:06, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >>\n> > > >>> Is it true that we can modify the timeout after creating\n> > > >>> BackgroundPsql object? If so, it seems we don't need to introduce the\n> > > >>> new timeout argument. But how?\n> > > >>\n> > > >> I can't remember if that's leftovers that incorrectly remains from an earlier\n> > > >> version of the BackgroundPsql work, or if it's a very bad explanation of\n> > > >> ->set_query_timer_restart(). 
The timeout will use the timeout_default value\n> > > >> and that cannot be overridden, it can only be reset per query.\n> > > >\n> > > > Thank you for confirming this. I see there is the same problem also in\n> > > > interactive_psql(). So I've attached the 0001 patch to fix these\n> > > > documentation issues.\n> > >\n> > > -A default timeout of $PostgreSQL::Test::Utils::timeout_default is set up,\n> > > -which can be modified later.\n> > > +A default timeout of $PostgreSQL::Test::Utils::timeout_default is set up.\n> > >\n> > > Since it cannot be modified, I think we should just say \"A timeout of ..\" and\n> > > call it a default timeout. This obviously only matters for the backpatch since\n> > > the sentence is removed in 0002.\n> >\n> > Agreed.\n> >\n> > I've attached new version patches (0002 and 0003 are unchanged except\n> > for the commit message). I'll push them, barring any objections.\n> >\n>\n> Pushed.\n\nFYI I've configured the buildfarm animal perentie to run regression\ntests including xid_wraparound:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=perentie&br=HEAD\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 1 Dec 2023 11:14:20 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "The way src/test/modules/xid_wraparound/meson.build is written, it \ninstalls the xid_wraparound.so module into production installations. \nFor test modules, a different installation code needs to be used. 
See \nneighboring test modules such as \nsrc/test/modules/test_rbtree/meson.build for examples.\n\n\n\n", "msg_date": "Wed, 7 Feb 2024 19:11:41 +0100", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "On Thu, Feb 8, 2024 at 3:11 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n>\n> The way src/test/modules/xid_wraparound/meson.build is written, it\n> installs the xid_wraparound.so module into production installations.\n> For test modules, a different installation code needs to be used. See\n> neighboring test modules such as\n> src/test/modules/test_rbtree/meson.build for examples.\n>\n\nGood catch, thanks.\n\nI've attached the patch to fix it. Does it make sense?\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 8 Feb 2024 13:05:49 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "On 08.02.24 05:05, Masahiko Sawada wrote:\n> On Thu, Feb 8, 2024 at 3:11 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n>>\n>> The way src/test/modules/xid_wraparound/meson.build is written, it\n>> installs the xid_wraparound.so module into production installations.\n>> For test modules, a different installation code needs to be used. See\n>> neighboring test modules such as\n>> src/test/modules/test_rbtree/meson.build for examples.\n>>\n> \n> Good catch, thanks.\n> \n> I've attached the patch to fix it. 
Does it make sense?\n\nYes, that looks correct to me and produces the expected behavior.\n\n\n\n", "msg_date": "Thu, 8 Feb 2024 08:06:28 +0100", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "On Thu, Feb 8, 2024 at 4:06 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n>\n> On 08.02.24 05:05, Masahiko Sawada wrote:\n> > On Thu, Feb 8, 2024 at 3:11 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n> >>\n> >> The way src/test/modules/xid_wraparound/meson.build is written, it\n> >> installs the xid_wraparound.so module into production installations.\n> >> For test modules, a different installation code needs to be used. See\n> >> neighboring test modules such as\n> >> src/test/modules/test_rbtree/meson.build for examples.\n> >>\n> >\n> > Good catch, thanks.\n> >\n> > I've attached the patch to fix it. Does it make sense?\n>\n> Yes, that looks correct to me and produces the expected behavior.\n>\n\nThank you for the check. Pushed at 1aa67a5ea687.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 8 Feb 2024 17:06:39 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" }, { "msg_contents": "Hello,\n\n30.11.2023 10:35, Masahiko Sawada wrote:\n>\n>> I've attached new version patches (0002 and 0003 are unchanged except\n>> for the commit message). I'll push them, barring any objections.\n>>\n> Pushed.\n\nI've discovered that the test 001_emergency_vacuum.pl can fail due to a\nrace condition. 
I can't see the server log at [1], but I reproduced the\nfailure locally and with additional logging and log_min_messages = DEBUG3,\nthe log shows:\n...\n2024-05-22 11:46:28.125 UTC [21256:2853] DEBUG:  SlruScanDirectory invoking callback on pg_xact/0690\n2024-05-22 11:46:28.125 UTC [21256:2854] DEBUG:  transaction ID wrap limit is 2147484396, limited by database with OID 5\n2024-05-22 11:46:28.126 UTC [21256:2855] LOG: !!!SendPostmasterSignal| PMSIGNAL_START_AUTOVAC_LAUNCHER\n2024-05-22 11:46:28.135 UTC [14871:20077] DEBUG:  postmaster received pmsignal signal\n2024-05-22 11:46:28.137 UTC [21257:1] DEBUG:  autovacuum launcher started\n2024-05-22 11:46:28.137 UTC [21257:2] DEBUG:  InitPostgres\n2024-05-22 11:46:28.138 UTC [21257:3] LOG:  !!!AutoVacLauncherMain| !AutoVacuumingActive() && !ShutdownRequestPending; \nbefore do_start_worker()\n2024-05-22 11:46:28.138 UTC [21257:4] LOG:  !!!do_start_worker| return quickly when there are no free workers\n2024-05-22 11:46:28.138 UTC [21257:5] DEBUG:  shmem_exit(0): 4 before_shmem_exit callbacks to make\n2024-05-22 11:46:28.138 UTC [21257:6] DEBUG:  shmem_exit(0): 6 on_shmem_exit callbacks to make\n2024-05-22 11:46:28.138 UTC [21257:7] DEBUG:  proc_exit(0): 1 callbacks to make\n2024-05-22 11:46:28.138 UTC [21257:8] DEBUG:  exit(0)\n2024-05-22 11:46:28.138 UTC [21257:9] DEBUG:  shmem_exit(-1): 0 before_shmem_exit callbacks to make\n2024-05-22 11:46:28.138 UTC [21257:10] DEBUG:  shmem_exit(-1): 0 on_shmem_exit callbacks to make\n2024-05-22 11:46:28.138 UTC [21257:11] DEBUG:  proc_exit(-1): 0 callbacks to make\n2024-05-22 11:46:28.146 UTC [21256:2856] DEBUG:  MultiXactId wrap limit is 2147483648, limited by database with OID 5\n2024-05-22 11:46:28.146 UTC [21256:2857] DEBUG:  MultiXact member stop limit is now 4294914944 based on MultiXact 1\n2024-05-22 11:46:28.146 UTC [21256:2858] DEBUG:  shmem_exit(0): 4 before_shmem_exit callbacks to make\n2024-05-22 11:46:28.147 UTC [21256:2859] DEBUG:  shmem_exit(0): 7 on_shmem_exit 
callbacks to make\n2024-05-22 11:46:28.147 UTC [21256:2860] DEBUG:  proc_exit(0): 1 callbacks to make\n2024-05-22 11:46:28.147 UTC [21256:2861] DEBUG:  exit(0)\n2024-05-22 11:46:28.147 UTC [21256:2862] DEBUG:  shmem_exit(-1): 0 before_shmem_exit callbacks to make\n2024-05-22 11:46:28.147 UTC [21256:2863] DEBUG:  shmem_exit(-1): 0 on_shmem_exit callbacks to make\n2024-05-22 11:46:28.147 UTC [21256:2864] DEBUG:  proc_exit(-1): 0 callbacks to make\n2024-05-22 11:46:28.151 UTC [14871:20078] DEBUG:  forked new backend, pid=21258 socket=8\n2024-05-22 11:46:28.171 UTC [14871:20079] DEBUG:  server process (PID 21256) exited with exit code 0\n2024-05-22 11:46:28.152 UTC [21258:1] [unknown] LOG:  connection received: host=[local]\n2024-05-22 11:46:28.171 UTC [21258:2] [unknown] DEBUG:  InitPostgres\n2024-05-22 11:46:28.172 UTC [21258:3] [unknown] LOG:  connection authenticated: user=\"vagrant\" method=trust \n(/pgtest/postgresql.git/src/test/modules/xid_wraparound/tmp_check/t_001_emergency_vacuum_main_data/pgdata/pg_hba.conf:117)\n2024-05-22 11:46:28.172 UTC [21258:4] [unknown] LOG:  connection authorized: user=vagrant database=postgres \napplication_name=001_emergency_vacuum.pl\n2024-05-22 11:46:28.175 UTC [21258:5] 001_emergency_vacuum.pl LOG: statement: INSERT INTO small(data) SELECT 1\n\nThat is, autovacuum worker (21256) sent PMSIGNAL_START_AUTOVAC_LAUNCHER,\npostmaster started autovacuum launcher, which could not start new\nautovacuum worker due to the process 21256 not exited yet.\n\nThe failure can be reproduced easily with the sleep added inside\nSetTransactionIdLimit():\n         if (TransactionIdFollowsOrEquals(curXid, xidVacLimit) &&\n                 IsUnderPostmaster && !InRecovery)\nSendPostmasterSignal(PMSIGNAL_START_AUTOVAC_LAUNCHER);\n+pg_usleep(10000L);\n\nBy the way I also discovered that rather resource-intensive xid_wraparound\ntests executed twice during the buildfarm \"make\" run (on dodo and perentie\n(see [2]) animals), at stage 
module-xid_wraparound-check and then at stage\ntestmodules-install-check-C.\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-05-19%2006%3A33%3A34\n[2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=perentie&dt=2024-05-22%2000%3A02%3A19\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Wed, 22 May 2024 15:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Testing autovacuum wraparound (including failsafe)" } ]
[ { "msg_contents": "When compute_query_id is not enabled (this is the default setting),\npg_stat_statements doesn't track any statements. This means that\nwe will see no entries in pg_stat_statements by default. I'm afraid that\nusers may easily forget to enable compute_query_id\nwhen using pg_stat_statements (because this setting was not necessary\nin v13 or before), and finally may have noticed the mis-configuration\nand failure of statements tracking after many queries were executed.\nFor example, we already have one report about this issue, in [1].\n\nShouldn't we do something so that users can avoid such mis-configuration?\n\nOne idea is to change the default value of compute_query_id from false to true.\nIf enabling compute_query_id doesn't incur any performance penalty,\nIMO this idea is very simple and enough.\n\nAnother idea is to change pg_stat_statements so that it emits an error\nat the server startup (i.e., prevents the server from starting up)\nif compute_query_id is not enabled. In this case, users can easily notice\nthe mis-configuration from the error message in the server log,\nenable compute_query_id, and then restart the server.\n\nIMO the former is better if there is no such performance risk. Otherwise\nwe should adopt the latter approach. Or you have the better idea?\n\nThought?\n\n[1]\nhttps://postgr.es/m/1953aec168224b95b0c962a622bef0794da6ff40.camel@moonset.ru\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Sat, 24 Apr 2021 23:54:25 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "compute_query_id and pg_stat_statements" }, { "msg_contents": "Le sam. 24 avr. 
2021 à 22:54, Fujii Masao <masao.fujii@oss.nttdata.com> a\nécrit :\n\n> For example, we already have one report about this issue, in [1].\n>\n\nthis report was only a few days after the patch changing the behavior was\ncommitted, unless you've been following the original thread (which has been\ngoing on for 2 years), that's kind of expected. release notes for pg14\nshould highlight that change, so hopefully people upgrading will see it.\nI'll also try to write some blog article about it to add more warnings.\n\n> Shouldn't we do something so that users can avoid such mis-configuration?\n>\n\n> One idea is to change the default value of compute_query_id from false to\n> true.\n> If enabling compute_query_id doesn't incur any performance penalty,\n> IMO this idea is very simple and enough.\n>\n\nit adds some noticeable overhead in oltp style workloads. I think that I\ndid some benchmarks in the original thread, and we decided not to enable it\nby default.\n\n> Another idea is to change pg_stat_statements so that it emits an error\n> at the server startup (i.e., prevents the server from starting up)\n> if compute_query_id is not enabled. In this case, users can easily notice\n> the mis-configuration from the error message in the server log,\n> enable compute_query_id, and then restart the server.\n>\n\nthat's also not an option, as one can now use pg_stat_statements with a\ndifferent queryid calculation. see for instance\nhttps://github.com/rjuju/pg_queryid for a proof of concept extension for\nthat. I think it's clear that multiple people will want to use a different\ncalculation as they have been asking for that for years.\n\n> IMO the former is better if there is no such performance risk. Otherwise\n> we should adopt the latter approach. Or you have the better idea?\n>\n\nI'm not sure how to address that, as temporarily disabling queryId\ncalculation should be allowed. maybe we could raise a warning once per\nbackend if pgss sees a dml query without queryId? 
but it could end up\ncreating more problems than it solves.\n\nfor the record people have also raised bugs on the powa project because\nplanning counters are not tracked by default, so compute_query_id will\nprobably add a bit of traffic.", "msg_date": "Sat, 24 Apr 2021 23:17:23 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Sat, Apr 24, 2021 at 11:54:25PM +0900, Fujii Masao wrote:\n> When compute_query_id is not enabled (this is the default setting),\n> pg_stat_statements doesn't track any statements. This means that\n> we will see no entries in pg_stat_statements by default. I'm afraid that\n> users may easily forget to enable compute_query_id\n> when using pg_stat_statements (because this setting was not necessary\n> in v13 or before), and finally may have noticed the mis-configuration\n> and failure of statements tracking after many queries were executed.\n> For example, we already have one report about this issue, in [1].\n> \n> Shouldn't we do something so that users can avoid such mis-configuration?\n> \n> One idea is to change the default value of compute_query_id from false to true.\n> If enabling compute_query_id doesn't incur any performance penalty,\n> IMO this idea is very simple and enough.\n\nI think the query overhead was too high (2%) to enable it by default:\n\n\thttps://www.postgresql.org/message-id/20201016160355.GA31474@alvherre.pgsql\n\n> Another idea is to change pg_stat_statements so that it emits an error\n> at the server startup (i.e., prevents the server from starting up)\n> if compute_query_id is not enabled. 
In this case, users can easily notice\n> the mis-configuration from the error message in the server log,\n> enable compute_query_id, and then restart the server.\n\nI think it throws an error in the server logs, but preventing server\nstart seems extreme. Also, compute_query_id is PGC_SUSET, meaning it\ncan be changed by the super-user, so you could enable compute_query_id\nwithout a server restart, which makes failing on start kind of odd.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Sat, 24 Apr 2021 11:22:20 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Sat, Apr 24, 2021 at 5:22 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Sat, Apr 24, 2021 at 11:54:25PM +0900, Fujii Masao wrote:\n> > When compute_query_id is not enabled (this is the default setting),\n> > pg_stat_statements doesn't track any statements. This means that\n> > we will see no entries in pg_stat_statements by default. 
I'm afraid that\n> > users may easily forget to enable compute_query_id\n> > when using pg_stat_statements (because this setting was not necessary\n> > in v13 or before), and finally may have noticed the mis-configuration\n> > and failure of statements tracking after many queries were executed.\n> > For example, we already have one report about this issue, in [1].\n> >\n> > Shouldn't we do something so that users can avoid such mis-configuration?\n> >\n> > One idea is to change the default value of compute_query_id from false to true.\n> > If enabling compute_query_id doesn't incur any performance penalty,\n> > IMO this idea is very simple and enough.\n>\n> I think the query overhead was too high (2%) to enable it by default:\n>\n> https://www.postgresql.org/message-id/20201016160355.GA31474@alvherre.pgsql\n\nPersonally I'd say 2% is not too high to turn it on by default, as it\ngoes down when you move past trivial queries, which is what most\npeople do. And since you can easily turn it off.\n\n\n> > Another idea is to change pg_stat_statements so that it emits an error\n> > at the server startup (i.e., prevents the server from starting up)\n> > if compute_query_id is not enabled. In this case, users can easily notice\n> > the mis-configuration from the error message in the server log,\n> > enable compute_query_id, and then restart the server.\n>\n> I think it throws an error in the server logs, but preventing server\n> start seems extreme. Also, compute_query_id is PGC_SUSET, meaning it\n> can be changed by the super-user, so you could enable compute_query_id\n> without a server restart, which makes failing on start kind of odd.\n\nHow about turning it into an enum instead of a boolean, that can be:\n\noff = always off\nauto = pg_stat_statements turns it on when it's loaded in\nshared_preload_libraries. 
Other extensions using it can do that too.\nBut it remains off if you haven't installed any *extension* that needs\nit\non = always on (if you want it in pg_stat_activity regardless of extensions)\n\nThe default would be \"auto\", which means that pg_stat_statements would\nwork as expected, but those who haven't installed it (or another\nextension that changes it) would not have to pay the overhead.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Sat, 24 Apr 2021 18:48:53 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Sat, Apr 24, 2021 at 06:48:53PM +0200, Magnus Hagander wrote:\n> > I think the query overhead was too high (2%) to enable it by default:\n> >\n> > https://www.postgresql.org/message-id/20201016160355.GA31474@alvherre.pgsql\n> \n> Personally I'd say 2% is not too high to turn it on by default, as it\n> goes down when you move past trivial queries, which is what most\n> people do. And since you can easily turn it off.\n\nWe would do a lot of work to reduce overhead by 2% on every query, and\nto add 2% for a hash that previously was only used by pg_stat_statements\nseems unwise.\n\n> How about turning it into an enum instead of a boolean, that can be:\n> \n> off = always off\n> auto = pg_stat_statements turns it on when it's loaded in\n> shared_preload_libraries. Other extensions using it can do that too.\n> But it remains off if you haven't installed any *extension* that needs\n> it\n> on = always on (if you want it in pg_stat_activity regardless of extensions)\n> \n> The default would be \"auto\", which means that pg_stat_statements would\n> work as expected, but those who haven't installed it (or another\n> extension that changes it) would not have to pay the overhead.\n\nThat's a pretty weird API. 
I think we just need people to turn it on\nlike they are doing when they configure pg_stat_statements anyway. \npg_stat_statements already requires configuration anyway.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Sat, 24 Apr 2021 13:09:08 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> That's a pretty weird API. I think we just need people to turn it on\n> like they are doing when they configure pg_stat_statements anyway. \n> pg_stat_statements already requires configuration anyway.\n\nAgreed. If pg_stat_statements were zero-configuration today then\nthis would be an annoying new burden, but it isn't.\n\nI haven't looked, but did we put anything into pg_stat_statements\nto make it easy to tell if you've messed up this setting?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 24 Apr 2021 13:43:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Sat, Apr 24, 2021 at 01:43:51PM -0400, Tom Lane wrote:\n> \n> I haven't looked, but did we put anything into pg_stat_statements\n> to make it easy to tell if you've messed up this setting?\n\nYou mean apart from having pg_stat_statements' view/SRFs returning\nnothing?\n\nI think it's a reasonable use case to sometimes disable query_id calculation,\ne.g. 
if you know that it will only lead to useless bloat in the entry and that\nyou won't need the info, so spamming warnings if there is no queryid could\ncause some pain.\n\n\n", "msg_date": "Sun, 25 Apr 2021 16:22:51 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Sat, Apr 24, 2021 at 01:43:51PM -0400, Tom Lane wrote:\n>> I haven't looked, but did we put anything into pg_stat_statements\n>> to make it easy to tell if you've messed up this setting?\n\n> You mean apart from having pg_stat_statements' view/SRFs returning\n> nothing?\n\n> I think it's a reasonable use case to sometimes disable query_id calculation,\n> e.g. if you know that it will only lead to useless bloat in the entry and that\n> you won't need the info, so spamming warnings if there is no queryid could\n> cause some pain.\n\nI agree repeated warnings would be bad news. I was wondering if we could\narrange a single warning at the time pg_stat_statements is preloaded into\nthe postmaster. In this way, if you tried to use a configuration file\nthat used to work, you'd hopefully get some notice about why it no longer\ndoes what you want. Also, if you are preloading pg_stat_statements, it\nseems reasonable to assume that you'd like the global value of the flag\nto be \"on\", even if there are use-cases for transiently disabling it.\n\nI think the way to detect \"being loaded into the postmaster\" is\n\tif (IsPostmasterEnvironment && !IsUnderPostmaster)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 25 Apr 2021 11:39:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Sun, Apr 25, 2021 at 11:39:55AM -0400, Tom Lane wrote:\n> \n> I agree repeated warnings would be bad news. 
I was wondering if we could\n> arrange a single warning at the time pg_stat_statements is preloaded into\n> the postmaster. In this way, if you tried to use a configuration file\n> that used to work, you'd hopefully get some notice about why it no longer\n> does what you want. Also, if you are preloading pg_stat_statements, it\n> seems reasonable to assume that you'd like the global value of the flag\n> to be \"on\", even if there are use-cases for transiently disabling it.\n\nWhat about people who want to use pg_stat_statements but are not ok with our\nquery_id heuristics and use a third-party plugin for that?\n\n\n", "msg_date": "Mon, 26 Apr 2021 00:17:42 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Sun, Apr 25, 2021 at 11:39:55AM -0400, Tom Lane wrote:\n>> I agree repeated warnings would be bad news. I was wondering if we could\n>> arrange a single warning at the time pg_stat_statements is preloaded into\n>> the postmaster. In this way, if you tried to use a configuration file\n>> that used to work, you'd hopefully get some notice about why it no longer\n>> does what you want. Also, if you are preloading pg_stat_statements, it\n>> seems reasonable to assume that you'd like the global value of the flag\n>> to be \"on\", even if there are use-cases for transiently disabling it.\n\n> What about people who want to use pg_stat_statements but are not ok with our\n> query_id heuristics and use a third-party plugin for that?\n\nThey're still going to want the GUC set to something other than \"off\",\nno? 
In any case it's just a one-time log message, so it's not likely\nto be *that* annoying.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 25 Apr 2021 13:17:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Sun, Apr 25, 2021 at 01:17:03PM -0400, Tom Lane wrote:\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > On Sun, Apr 25, 2021 at 11:39:55AM -0400, Tom Lane wrote:\n> >> I agree repeated warnings would be bad news. I was wondering if we could\n> >> arrange a single warning at the time pg_stat_statements is preloaded into\n> >> the postmaster. In this way, if you tried to use a configuration file\n> >> that used to work, you'd hopefully get some notice about why it no longer\n> >> does what you want. Also, if you are preloading pg_stat_statements, it\n> >> seems reasonable to assume that you'd like the global value of the flag\n> >> to be \"on\", even if there are use-cases for transiently disabling it.\n> \n> > What about people who want to use pg_stat_statements but are not ok with our\n> > query_id heuristics and use a third-party plugin for that?\n> \n> They're still going to want the GUC set to something other than \"off\",\n> no?\n\nThey will want compute_query_id to be off. And they actually will *need* that,\nas we recommend third-party plugins computing alternative query_id to error out\nif they see that a query_id has already been generated, to avoid any problem\nif compute_query_id is being temporarily toggled. That's what I did in the POC\nplugin for external query_id at [1].\n\n> In any case it's just a one-time log message, so it's not likely\n> to be *that* annoying.\n\nIn that case it should be phrased in a way that makes it clear that\npg_stat_statements can work without enabling compute_query_id, something like:\n\n\"compute_query_id is disabled. 
This module won't track any activity unless you\nconfigured a third-party extension that computes query identifiers\"\n\n[1] https://github.com/rjuju/pg_queryid/blob/master/pg_queryid.c#L172\n\n\n", "msg_date": "Mon, 26 Apr 2021 01:32:06 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On 24.04.21 19:43, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n>> That's a pretty weird API. I think we just need people to turn it on\n>> like they are doing when the configure pg_stat_statements anyway.\n>> pg_stat_statements already requires configuration anyway.\n> \n> Agreed. If pg_stat_statements were zero-configuration today then\n> this would be an annoying new burden, but it isn't.\n\nI think people can understand \"add pg_stat_statements to \nshared_preload_libraries\" and \"install the extension\". You have to turn \nit on somehow after all.\n\nNow there is the additional burden of turning on this weird setting that \nno one understands. That's a 50% increase in burden.\n\nAnd almost no one will want to use a nondefault setting.\n\npg_stat_statements is pretty popular. I think leaving in this \nrequirement will lead to widespread confusion and complaints.\n\n\n", "msg_date": "Mon, 26 Apr 2021 16:46:12 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "Re: Peter Eisentraut\n> > Agreed. If pg_stat_statements were zero-configuration today then\n> > this would be an annoying new burden, but it isn't.\n> \n> I think people can understand \"add pg_stat_statements to\n> shared_preload_libraries\" and \"install the extension\". 
You have to turn it\n> on somehow after all.\n\nFwiw, I'd claim that pg_stat_statements *is* zero-configuration today.\nYou just have to load the module (= shared_preload_libraries), and it\nwill start working. Later you can run CREATE EXTENSION to actually see\nthe stats, but they are already being collected in the background.\n\n> Now there is the additional burden of turning on this weird setting that no\n> one understands. That's a 50% increase in burden.\n> \n> And almost no one will want to use a nondefault setting.\n> \n> pg_stat_statements is pretty popular. I think leaving in this requirement\n> will lead to widespread confusion and complaints.\n\nAck, please make the default config (i.e. after setting shared_preload_libraries)\ndo something sensible. Having to enable some \"weird\" internal other setting\nwill be very hard to explain to users.\n\nFwiw, I'd even want to have pg_stat_statements enabled in core by\ndefault. That would be awesome UX. (And turning off could be as simple as\nsetting compute_query_id=off.)\n\nChristoph\n\n\n", "msg_date": "Mon, 26 Apr 2021 17:34:30 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Mon, Apr 26, 2021 at 05:34:30PM +0200, Christoph Berg wrote:\n> Re: Peter Eisentraut\n> > > Agreed. If pg_stat_statements were zero-configuration today then\n> > > this would be an annoying new burden, but it isn't.\n> > \n> > I think people can understand \"add pg_stat_statements to\n> > shared_preload_libraries\" and \"install the extension\". You have to turn it\n> > on somehow after all.\n> \n> Fwiw, I'd claim that pg_stat_statements *is* zero-configuration today.\n> You just have to load the module (= shared_preload_libraries), and it\n> will start working. 
Later you can run CREATE EXTENSION to actually see\n> the stats, but they are already being collected in the background.\n> \n> > Now there is the additional burden of turning on this weird setting that no\n> > one understands. That's a 50% increase in burden.\n> > \n> > And almost no one will want to use a nondefault setting.\n> > \n> > pg_stat_statements is pretty popular. I think leaving in this requirement\n> > will lead to widespread confusion and complaints.\n> \n> Ack, please make the default config (i.e. after setting shared_preload_libraries)\n> do something sensible. Having to enable some \"weird\" internal other setting\n> will be very hard to explain to users.\n> \n> Fwiw, I'd even want to have pg_stat_statements enabled in core by\n> default. That would be awesome UX. (And turning off could be as simple as\n> setting compute_query_id=off.)\n\nTechnically, pg_stat_statements can turn on compute_query_id when it is\nloaded, even if it is 'off' in postgresql.conf, right? And\npg_stat_statements would know if an alternate hash method is being used,\nright?\n\nThis is closer to Magnus's idea of having a three-value\ncompute_query_id, except it is more controlled by pg_stat_statements. \nAnother idea would be to throw a user-visible warning if the\npg_stat_statements extension is loaded and compute_query_id is off.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 26 Apr 2021 12:21:04 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "Greetings,\n\n* Bruce Momjian (bruce@momjian.us) wrote:\n> On Mon, Apr 26, 2021 at 05:34:30PM +0200, Christoph Berg wrote:\n> > Re: Peter Eisentraut\n> > > > Agreed. 
If pg_stat_statements were zero-configuration today then\n> > > > this would be an annoying new burden, but it isn't.\n> > > \n> > > I think people can understand \"add pg_stat_statements to\n> > > shared_preload_libraries\" and \"install the extension\". You have to turn it\n> > > on somehow after all.\n> > \n> > Fwiw, I'd claim that pg_stat_statements *is* zero-configuration today.\n> > You just have to load the module (= shared_preload_libraries), and it\n> > will start working. Later you can run CREATE EXTENSION to actually see\n> > the stats, but they are already being collected in the background.\n> > \n> > > Now there is the additional burden of turning on this weird setting that no\n> > > one understands. That's a 50% increase in burden.\n> > > \n> > > And almost no one will want to use a nondefault setting.\n> > > \n> > > pg_stat_statements is pretty popular. I think leaving in this requirement\n> > > will lead to widespread confusion and complaints.\n> > \n> > Ack, please make the default config (i.e. after setting shared_preload_libraries)\n> > do something sensible. Having to enable some \"weird\" internal other setting\n> > will be very hard to explain to users.\n> > \n> > Fwiw, I'd even want to have pg_stat_statements enabled in core by\n> > default. That would awesome UX. (And turning off could be as simple as\n> > setting compute_query_id=off.)\n> \n> Techically, pg_stat_statements can turn on compute_query_id when it is\n> loaded, even if it is 'off' in postgresql.conf, right? And\n> pg_stat_statements would know if an alternate hash method is being used,\n> right?\n\n+1 on this approach. 
I agree that we should avoid making every\nnew user and every user who is upgrading with pg_stat_statements\ninstalled have to go twiddle this parameter.\n\nThanks,\n\nStephen", "msg_date": "Mon, 26 Apr 2021 12:31:35 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Bruce Momjian (bruce@momjian.us) wrote:\n>> Technically, pg_stat_statements can turn on compute_query_id when it is\n>> loaded, even if it is 'off' in postgresql.conf, right? 
And\n> >> pg_stat_statements would know if an alternate hash method is being used,\n> >> right?\n> \n> > +1 on this approach.\n> \n> That'd make it impossible to turn off or adjust afterwards, wouldn't it?\n> I'm afraid the confusion stemming from that would outweigh any simplicity.\n> \n> I would be in favor of logging a message at startup to the effect of\n> \"this is misconfigured\" (as per upthread discussion), although whether\n> people would see that is uncertain.\n\nI think a user-visible warning at CREATE EXNTENSION would help too.\n\n> In the end, it's not like this is the first time we've ever made an\n> incompatible change in configuration needs; and it won't be the last\n> either. I don't buy the argument that pg_stat_statements users can't\n> cope with adding the additional setting. (Of course, we should be\n> careful to call it out as an incompatible change in the release notes.)\n\nAgreed.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 26 Apr 2021 13:00:21 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Mon, Apr 26, 2021 at 6:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Bruce Momjian (bruce@momjian.us) wrote:\n> >> Techically, pg_stat_statements can turn on compute_query_id when it is\n> >> loaded, even if it is 'off' in postgresql.conf, right? And\n> >> pg_stat_statements would know if an alternate hash method is being used,\n> >> right?\n>\n> > +1 on this approach.\n>\n> That'd make it impossible to turn off or adjust afterwards, wouldn't it?\n> I'm afraid the confusion stemming from that would outweigh any simplicity.\n\nThatäs why I suggested the three value one. 
Default to a mode where\nit's automatic, which is what the majority is going to want, but have\na way to explicitly turn it on.\n\n\n> I would be in favor of logging a message at startup to the effect of\n> \"this is misconfigured\" (as per upthread discussion), although whether\n> people would see that is uncertain.\n\nSome people would. Many wouldn't, and sadly many hours would be spent\non debugging things before they got there -- based on experience of\nhow many people actually read the logs..\n\n> In the end, it's not like this is the first time we've ever made an\n> incompatible change in configuration needs; and it won't be the last\n> either. I don't buy the argument that pg_stat_statements users can't\n> cope with adding the additional setting. (Of course, we should be\n> careful to call it out as an incompatible change in the release notes.)\n\nThe fact that we've made changes before that complicated our users\nexperience isn't in itself an argument for doing it again though...\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Mon, 26 Apr 2021 19:04:32 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Tue, Apr 27, 2021 at 12:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Bruce Momjian (bruce@momjian.us) wrote:\n> >> Techically, pg_stat_statements can turn on compute_query_id when it is\n> >> loaded, even if it is 'off' in postgresql.conf, right? 
And\n> >> pg_stat_statements would know if an alternate hash method is being used,\n> >> right?\n>\n> > +1 on this approach.\n>\n> That'd make it impossible to turn off or adjust afterwards, wouldn't it?\n> I'm afraid the confusion stemming from that would outweigh any simplicity.\n\nThat's why I suggested the three value one. 
And\n> > >> pg_stat_statements would know if an alternate hash method is being used,\n> > >> right?\n> >\n> > > +1 on this approach.\n> >\n> > That'd make it impossible to turn off or adjust afterwards, wouldn't it?\n> > I'm afraid the confusion stemming from that would outweigh any simplicity.\n> >\n> > I would be in favor of logging a message at startup to the effect of\n> > \"this is misconfigured\" (as per upthread discussion), although whether\n> > people would see that is uncertain.\n>\n> I think a user-visible warning at CREATE EXNTENSION would help too.\n\nIt would help a bit, but actually logging it would probably help more.\nMost people don't run the CREATE EXTENSION commands manually, it's all\ndone as part of either system install scripts or of application\nmigrations.\n\nBut that doesn't mean it wouldn't be useful to do it for those that\n*do* run things manually, it just wouldn't be sufficient in itself.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Mon, 26 Apr 2021 19:06:01 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Tue, Apr 27, 2021 at 1:04 AM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> Thatäs why I suggested the three value one. 
Default to a mode where\n> it's automatic, which is what the majority is going to want, but have\n> a way to explicitly turn it on.\n\nAgreed, that also sounds like a sensible default.\n\n\n", "msg_date": "Tue, 27 Apr 2021 01:14:04 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "Greetings,\n\n* Magnus Hagander (magnus@hagander.net) wrote:\n> On Mon, Apr 26, 2021 at 6:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Stephen Frost <sfrost@snowman.net> writes:\n> > > * Bruce Momjian (bruce@momjian.us) wrote:\n> > >> Technically, pg_stat_statements can turn on compute_query_id when it is\n> > >> loaded, even if it is 'off' in postgresql.conf, right? And\n> > >> pg_stat_statements would know if an alternate hash method is being used,\n> > >> right?\n> >\n> > > +1 on this approach.\n> >\n> > That'd make it impossible to turn off or adjust afterwards, wouldn't it?\n\nI don't know that it actually would, but ...\n\n> That's why I suggested the three value one. 
(Of course, we should be\n> > careful to call it out as an incompatible change in the release notes.)\n> \n> The fact that we've made changes before that complicated our users\n> experience isn't in itself an argument for doing it again though...\n\nI'm generally a proponent of sensible changes across major versions even\nif it means that the user has to adjust things, but this seems like a\ncase where we're punting on something that we really should just be able\nto figure out the right answer to and that seems like a step backwards.\n\nThanks,\n\nStephen", "msg_date": "Mon, 26 Apr 2021 13:23:54 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Magnus Hagander (magnus@hagander.net) wrote:\n>> Thatäs why I suggested the three value one. Default to a mode where\n>> it's automatic, which is what the majority is going to want, but have\n>> a way to explicitly turn it on.\n\n> This is certainly fine with me too, though it seems a bit surprising to\n> me that we couldn't just figure out what the user actually wants based\n> on what's installed/running for any given combination.\n\nI'd be on board with having pg_stat_statement's pg_init function do\nsomething to adjust the setting, if we can figure out how to do that\nin a way that's not confusing in itself. I'm not sure though that\nthe GUC engine offers a good way.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 26 Apr 2021 13:29:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On 2021-Apr-26, Tom Lane wrote:\n\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Magnus Hagander (magnus@hagander.net) wrote:\n> >> That�s why I suggested the three value one. 
Default to a mode where\n> >> it's automatic, which is what the majority is going to want, but have\n> >> a way to explicitly turn it on.\n> \n> > This is certainly fine with me too, though it seems a bit surprising to\n> > me that we couldn't just figure out what the user actually wants based\n> > on what's installed/running for any given combination.\n> \n> I'd be on board with having pg_stat_statement's pg_init function do\n> something to adjust the setting, if we can figure out how to do that\n> in a way that's not confusing in itself. I'm not sure though that\n> the GUC engine offers a good way.\n\nI think it's straightforward, if we decouple the tri-valued enum used\nfor guc.c purposes from a separate boolean that actually enables the\nfeature. GUC sets the boolean to \"off\" initially when it sees the enum\nas \"auto\", and then pg_stat_statement's _PG_init modifies it during its\nown startup as needed.\n\nSo the user can turn the GUC off, and then pg_stat_statement does\nnothing and there is no performance drawback; or leave it \"auto\" and\nit'll only compute query_id if pg_stat_statement is loaded; or leave it\non if they want the query_id for other purposes.\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\"El miedo atento y previsor es la madre de la seguridad\" (E. Burke)\n\n\n", "msg_date": "Mon, 26 Apr 2021 13:43:31 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Magnus Hagander (magnus@hagander.net) wrote:\n> >> Thatäs why I suggested the three value one. 
Default to a mode where\n> >> it's automatic, which is what the majority is going to want, but have\n> >> a way to explicitly turn it on.\n> \n> > This is certainly fine with me too, though it seems a bit surprising to\n> > me that we couldn't just figure out what the user actually wants based\n> > on what's installed/running for any given combination.\n> \n> I'd be on board with having pg_stat_statement's pg_init function do\n> something to adjust the setting, if we can figure out how to do that\n> in a way that's not confusing in itself. I'm not sure though that\n> the GUC engine offers a good way.\n\nBoth of the extensions are getting loaded via pg_stat_statements and\nboth can have pg_init functions which work together to come up with the\nright answer, no?\n\nThat is- can't pg_stat_statements, when it's loaded, enable\ncompute_query_id if it's not already enabled, and then the pg_queryid\nmodule simply disable it when it gets loaded in it's pg_init()? Telling\npeople who are using pg_queryid to have it loaded *after*\npg_stat_statements certainly seems reasonable to me, but if folks don't\nlike that then maybe have a tri-state which is 'auto', 'on', and 'off',\nwhere pg_stat_statements would set it to 'on' if it's set to 'auto', but\nnot do anything if it starts as 'off'. pg_queryid would then set it to\n'off' when it's loaded and it wouldn't matter if it's loaded before or\nafter.\n\nThanks,\n\nStephen", "msg_date": "Mon, 26 Apr 2021 13:43:32 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "Greetings,\n\n* Alvaro Herrera (alvherre@alvh.no-ip.org) wrote:\n> On 2021-Apr-26, Tom Lane wrote:\n> \n> > Stephen Frost <sfrost@snowman.net> writes:\n> > > * Magnus Hagander (magnus@hagander.net) wrote:\n> > >> Thatäs why I suggested the three value one. 
Default to a mode where\n> > >> it's automatic, which is what the majority is going to want, but have\n> > >> a way to explicitly turn it on.\n> > \n> > > This is certainly fine with me too, though it seems a bit surprising to\n> > > me that we couldn't just figure out what the user actually wants based\n> > > on what's installed/running for any given combination.\n> > \n> > I'd be on board with having pg_stat_statement's pg_init function do\n> > something to adjust the setting, if we can figure out how to do that\n> > in a way that's not confusing in itself. I'm not sure though that\n> > the GUC engine offers a good way.\n> \n> I think it's straightforward, if we decouple the tri-valued enum used\n> for guc.c purposes from a separate boolean that actually enables the\n> feature. GUC sets the boolean to \"off\" initially when it sees the enum\n> as \"auto\", and then pg_stat_statement's _PG_init modifies it during its\n> own startup as needed.\n> \n> So the user can turn the GUC off, and then pg_stat_statement does\n> nothing and there is no performance drawback; or leave it \"auto\" and\n> it'll only compute query_id if pg_stat_statement is loaded; or leave it\n> on if they want the query_id for other purposes.\n\nYeah, this is more-or-less the same as what I was just proposing in an\nemail that crossed this one. Using a separate boolean would certainly\nbe fine.\n\nThanks,\n\nStephen", "msg_date": "Mon, 26 Apr 2021 13:45:49 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "Hi,\n\nOn 2021-04-26 13:43:31 -0400, Alvaro Herrera wrote:\n> I think it's straightforward, if we decouple the tri-valued enum used\n> for guc.c purposes from a separate boolean that actually enables the\n> feature. 
GUC sets the boolean to \"off\" initially when it sees the enum\n> as \"auto\", and then pg_stat_statement's _PG_init modifies it during its\n> own startup as needed.\n\n> So the user can turn the GUC off, and then pg_stat_statement does\n> nothing and there is no performance drawback; or leave it \"auto\" and\n> it'll only compute query_id if pg_stat_statement is loaded; or leave it\n> on if they want the query_id for other purposes.\n\nI think that's the right direction. I wonder though if we shouldn't go a\nbit further. Have one guc that determines the \"query id provider\" (NULL\nor a shared library), and one GUC that configures whether query-id is\ncomputed (never, on-demand/auto, always). For the provider GUC load the\n.so and look up a function with some well known name.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 26 Apr 2021 11:14:08 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I think that's the right direction. I wonder though if we shouldn't go a\n> bit further. Have one guc that determines the \"query id provider\" (NULL\n> or a shared library), and one GUC that configures whether query-id is\n> computed (never, on-demand/auto, always). For the provider GUC load the\n> .so and look up a function with some well known name.\n\nThat's sounding like a pretty sane design, actually. 
Not sure about\nthe shared-library-name-with-fixed-function-name detail, but certainly\nit seems to be useful to separate \"I need a query-id\" from the details\nof the ID calculation.\n\nRather than a GUC per se for the ID provider, maybe we could have a\nfunction hook that defaults to pointing at the in-core computation,\nand then a module wanting to override that just gets into the hook.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 26 Apr 2021 14:21:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "Hi,\n\nOn 2021-04-26 14:21:00 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> That's sounding like a pretty sane design, actually. Not sure about\n> the shared-library-name-with-fixed-function-name detail, but certainly\n> it seems to be useful to separate \"I need a query-id\" from the details\n> of the ID calculation.\n> \n> Rather than a GUC per se for the ID provider, maybe we could have a\n> function hook that defaults to pointing at the in-core computation,\n> and then a module wanting to override that just gets into the hook.\n\nI have a preference to determining the provider via GUC instead of a\nhook because it is both easier to introspect and easier to configure.\n\nIf the provider is loaded via a hook, and the shared library is loaded\nvia shared_preload_libraries, one can't easily just turn that off in a\nsingle session, but needs to restart or explicitly load a different\nlibrary (that can't already be loaded).\n\nWe also don't have any way to show what's hooking into a hook.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 26 Apr 2021 11:37:45 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Mon, Apr 26, 2021 at 8:14 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-04-26 
13:43:31 -0400, Alvaro Herrera wrote:\n> > I think it's straightforward, if we decouple the tri-valued enum used\n> > for guc.c purposes from a separate boolean that actually enables the\n> > feature. GUC sets the boolean to \"off\" initially when it sees the enum\n> > as \"auto\", and then pg_stat_statement's _PG_init modifies it during its\n> > own startup as needed.\n\nThat's pretty much exactly my original suggestion, yes :)\n\n\n> > So the user can turn the GUC off, and then pg_stat_statement does\n> > nothing and there is no performance drawback; or leave it \"auto\" and\n> > it'll only compute query_id if pg_stat_statement is loaded; or leave it\n> > on if they want the query_id for other purposes.\n>\n> I think that's the right direction. I wonder though if we shouldn't go a\n> bit further. Have one guc that determines the \"query id provider\" (NULL\n> or a shared library), and one GUC that configures whether query-id is\n> computed (never, on-demand/auto, always). For the provider GUC load the\n> .so and look up a function with some well known name.\n\nOn Mon, Apr 26, 2021 at 8:37 PM Andres Freund <andres@anarazel.de> wrote:\n> I have a preference to determining the provider via GUC instead of a\n> hook because it is both easier to introspect and easier to configure.\n\n+1 in general. Though we could of course also have a read-only\ninternal GUC that would show what we ended up with, and still\nconfigure it with shared_preload_libraries, or loaded in some other\nway. 
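The hook-versus-provider-GUC trade-off discussed above can be made concrete with a toy Python model — a bare function pointer that an extension silently overwrites, versus a provider chosen by name. All names here are invented for illustration; the real implementations would of course be C inside the server:

```python
# Stand-in for the in-core jumble-based query-id computation.
def core_compute_query_id(query_text: str) -> int:
    return hash(query_text) & 0xFFFFFFFFFFFFFFFF

# Hook style: a function pointer that defaults to the in-core provider and
# that an extension's _PG_init overwrites. Nothing records *which* module
# is behind the pointer afterwards, which is the introspection complaint.
compute_query_id_hook = core_compute_query_id

def extension_pg_init():
    global compute_query_id_hook
    compute_query_id_hook = lambda q: 42  # toy third-party provider

# GUC style: the provider is selected by name, so "which provider is
# active" is an ordinary, inspectable setting.
providers = {"core": core_compute_query_id, "pg_queryid": lambda q: 42}
query_id_provider = "core"

assert compute_query_id_hook("SELECT 1") == core_compute_query_id("SELECT 1")
extension_pg_init()
assert compute_query_id_hook("SELECT 1") == 42   # overridden, but by whom?
assert providers["pg_queryid"]("SELECT 1") == 42  # explicit by name
```

This is only a sketch of the two configuration styles, not of any actual PostgreSQL API.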
In a way it'd be cleaner to \"always load modules with\nshared_preload_libraries\", but I can certainly see the arguments in\neither direction..\n\nBut whichever way it's configured, having a well exposed way to know\nwhat it actually is would be important.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Mon, 26 Apr 2021 20:43:21 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Mon, Apr 26, 2021 at 11:37:45AM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2021-04-26 14:21:00 -0400, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > That's sounding like a pretty sane design, actually. Not sure about\n> > the shared-library-name-with-fixed-function-name detail, but certainly\n> > it seems to be useful to separate \"I need a query-id\" from the details\n> > of the ID calculation.\n> > \n> > Rather than a GUC per se for the ID provider, maybe we could have a\n> > function hook that defaults to pointing at the in-core computation,\n> > and then a module wanting to override that just gets into the hook.\n> \n> I have a preference to determining the provider via GUC instead of a\n> hook because it is both easier to introspect and easier to configure.\n\nIn any case, having a different provider would greatly simplify third-party\nqueryid lib authors and users life. For now the core queryid is computed\nbefore post_parse_analyze_hook, but any third party plugin would have to do it\nas a post_parse_analyze_hook, so you have to make sure that the lib is at the\nright position in shared_preload_libraries to have it work, eg. 
[1], depending\non how pg_stat_statements and other similar modules call\nprev_post_parse_analyze_hook, which is a pretty bad thing.\n\n> If the provider is loaded via a hook, and the shared library is loaded\n> via shared_preload_libraries, one can't easily just turn that off in a\n> single session, but needs to restart or explicitly load a different\n> library (that can't already be loaded).\n\nOn the other hand we *don't* want to dynamically change the provider.\nTemporarily enabling/disabling queryid calculation is ok, but generating\ndifferent hashes for the same query isn't.\n\n> We also don't have any way to show what's hooking into a hook.\n\nIf we had a dedicated query_id hook, then plugins should error out if users\nconfigured multiple plugins to calculate a query_id, so it should be easy to\nknow which plugin is responsible for it without knowing who hooked into the\nhook.\n\n[1] https://github.com/rjuju/pg_queryid/blob/master/pg_queryid.c#L116-L117\n\n\n", "msg_date": "Tue, 27 Apr 2021 14:25:04 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Tue, Apr 27, 2021 at 02:25:04PM +0800, Julien Rouhaud wrote:\n> On Mon, Apr 26, 2021 at 11:37:45AM -0700, Andres Freund wrote:\n>> On 2021-04-26 14:21:00 -0400, Tom Lane wrote:\n>>> That's sounding like a pretty sane design, actually. 
Not sure about\n>>> the shared-library-name-with-fixed-function-name detail, but certainly\n>>> it seems to be useful to separate \"I need a query-id\" from the details\n>>> of the ID calculation.\n>>> \n>>> Rather than a GUC per se for the ID provider, maybe we could have a\n>>> function hook that defaults to pointing at the in-core computation,\n>>> and then a module wanting to override that just gets into the hook.\n>> \n>> I have a preference to determining the provider via GUC instead of a\n>> hook because it is both easier to introspect and easier to configure.\n\nSo, this thread has died two weeks ago, and it is still an open item.\nCould it be possible to move to a resolution by beta1? The consensus\nI can get from the thread is that we should have a tri-value state to\ntrack an extra \"auto\" for the query ID computation, as proposed by\nAlvaro here:\nhttps://www.postgresql.org/message-id/20210426174331.GA19401@alvherre.pgsql\n\nUnfortunately, nothing has happened to be able to do something like\nthat.\n--\nMichael", "msg_date": "Tue, 11 May 2021 15:04:13 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "\n\nOn 2021/05/11 15:04, Michael Paquier wrote:\n> On Tue, Apr 27, 2021 at 02:25:04PM +0800, Julien Rouhaud wrote:\n>> On Mon, Apr 26, 2021 at 11:37:45AM -0700, Andres Freund wrote:\n>>> On 2021-04-26 14:21:00 -0400, Tom Lane wrote:\n>>>> That's sounding like a pretty sane design, actually. 
Not sure about\n>>>> the shared-library-name-with-fixed-function-name detail, but certainly\n>>>> it seems to be useful to separate \"I need a query-id\" from the details\n>>>> of the ID calculation.\n>>>>\n>>>> Rather than a GUC per se for the ID provider, maybe we could have a\n>>>> function hook that defaults to pointing at the in-core computation,\n>>>> and then a module wanting to override that just gets into the hook.\n>>>\n>>> I have a preference to determining the provider via GUC instead of a\n>>> hook because it is both easier to introspect and easier to configure.\n> \n> So, this thread has died two weeks ago, and it is still an open item.\n> Could it be possible to move to a resolution by beta1? The consensus\n> I can get from the thread is that we should have a tri-value state to\n> track an extra \"auto\" for the query ID computation, as proposed by\n> Alvaro here:\n> https://www.postgresql.org/message-id/20210426174331.GA19401@alvherre.pgsql\n\n+1\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 11 May 2021 15:34:18 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Tue, May 11, 2021 at 03:04:13PM +0900, Michael Paquier wrote:\n> On Tue, Apr 27, 2021 at 02:25:04PM +0800, Julien Rouhaud wrote:\n> > On Mon, Apr 26, 2021 at 11:37:45AM -0700, Andres Freund wrote:\n> >> On 2021-04-26 14:21:00 -0400, Tom Lane wrote:\n> >>> That's sounding like a pretty sane design, actually. 
Not sure about\n> >>> the shared-library-name-with-fixed-function-name detail, but certainly\n> >>> it seems to be useful to separate \"I need a query-id\" from the details\n> >>> of the ID calculation.\n> >>> \n> >>> Rather than a GUC per se for the ID provider, maybe we could have a\n> >>> function hook that defaults to pointing at the in-core computation,\n> >>> and then a module wanting to override that just gets into the hook.\n> >> \n> >> I have a preference to determining the provider via GUC instead of a\n> >> hook because it is both easier to introspect and easier to configure.\n> \n> So, this thread has died two weeks ago, and it is still an open item.\n> Could it be possible to move to a resolution by beta1? The consensus\n> I can get from the thread is that we should have a tri-value state to\n> track an extra \"auto\" for the query ID computation, as proposed by\n> Alvaro here:\n> https://www.postgresql.org/message-id/20210426174331.GA19401@alvherre.pgsql\n> \n> Unfortunately, nothing has happened to be able to do something like\n> that.\n\nMy understanding was that there wasn't a consensus on how to fix the problem.\n\nAnyway, PFA a patch that implements a [off | on | auto] compute_query_id, and\nprovides a new queryIdWanted() function to let third party plugins inform us\nthat they want a query id if possible.\n\nAs it was noted somewhere in that thread, that's a hack on top of the GUC\nmachinery, so compute_query_id will display \"on\" rather than \"auto\" (or \"auto\nand enabled\" or whatever) since GUC isn't designed to handle that behavior.\n\nFor the record I also tested the patch using pg_qualstats(), which can be\nloaded interactively and also benefits from a query identifier. 
It works as\nexpected, as in \"query identifiers are enabled but only for the backend that\nloaded pg_qualstats\".", "msg_date": "Tue, 11 May 2021 15:35:39 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Tue, May 11, 2021 at 8:04 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Apr 27, 2021 at 02:25:04PM +0800, Julien Rouhaud wrote:\n> > On Mon, Apr 26, 2021 at 11:37:45AM -0700, Andres Freund wrote:\n> >> On 2021-04-26 14:21:00 -0400, Tom Lane wrote:\n> >>> That's sounding like a pretty sane design, actually. Not sure about\n> >>> the shared-library-name-with-fixed-function-name detail, but certainly\n> >>> it seems to be useful to separate \"I need a query-id\" from the details\n> >>> of the ID calculation.\n> >>>\n> >>> Rather than a GUC per se for the ID provider, maybe we could have a\n> >>> function hook that defaults to pointing at the in-core computation,\n> >>> and then a module wanting to override that just gets into the hook.\n> >>\n> >> I have a preference to determining the provider via GUC instead of a\n> >> hook because it is both easier to introspect and easier to configure.\n>\n> So, this thread has died two weeks ago, and it is still an open item.\n> Could it be possible to move to a resolution by beta1? The consensus\n> I can get from the thread is that we should have a tri-value state to\n> track an extra \"auto\" for the query ID computation, as proposed by\n> Alvaro here:\n> https://www.postgresql.org/message-id/20210426174331.GA19401@alvherre.pgsql\n\n\nTechnically I think that was my suggestion from earlier in that thread\nthat Alvaro just +1ed :)\n\nThat said, I sort of put that one aside when both Bruce and Tom\nconsidered it \"a pretty weird API\" to quote Bruce. 
I had missed the\nfact that Tom changed his mind (maybe when picking up on more of the\ndetails).\n\nAnd FTR, I still think this is the best way forward.\n\nI think Andres also raised a good point about the ability to actually\nknow which one is in use.\n\nEven if we keep the current way of *setting* the hook, I think it\nmight be worthwhile to expose a PGC_INTERNAL guc that shows *which*\nimplementation is actually in use?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Tue, 11 May 2021 09:39:06 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Tue, May 11, 2021 at 9:35 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Tue, May 11, 2021 at 03:04:13PM +0900, Michael Paquier wrote:\n> > On Tue, Apr 27, 2021 at 02:25:04PM +0800, Julien Rouhaud wrote:\n> > > On Mon, Apr 26, 2021 at 11:37:45AM -0700, Andres Freund wrote:\n> > >> On 2021-04-26 14:21:00 -0400, Tom Lane wrote:\n> > >>> That's sounding like a pretty sane design, actually. Not sure about\n> > >>> the shared-library-name-with-fixed-function-name detail, but certainly\n> > >>> it seems to be useful to separate \"I need a query-id\" from the details\n> > >>> of the ID calculation.\n> > >>>\n> > >>> Rather than a GUC per se for the ID provider, maybe we could have a\n> > >>> function hook that defaults to pointing at the in-core computation,\n> > >>> and then a module wanting to override that just gets into the hook.\n> > >>\n> > >> I have a preference to determining the provider via GUC instead of a\n> > >> hook because it is both easier to introspect and easier to configure.\n> >\n> > So, this thread has died two weeks ago, and it is still an open item.\n> > Could it be possible to move to a resolution by beta1? 
The consensus\n> > I can get from the thread is that we should have a tri-value state to\n> > track an extra \"auto\" for the query ID computation, as proposed by\n> > Alvaro here:\n> > https://www.postgresql.org/message-id/20210426174331.GA19401@alvherre.pgsql\n> >\n> > Unfortunately, nothing has happened to be able to do something like\n> > that.\n>\n> My understanding was that there wasn't a consensus on how to fix the problem.\n>\n> Anyway, PFA a patch that implement a [off | on | auto] compute_query_id, and\n> provides a new queryIdWanted() function to let third party plugins inform us\n> that they want a query id if possible.\n\n30 second review -- wouldn't it be cleaner to keep a separate boolean\ntelling the backend \"include it or not\", which is set to true/false in\nthe guc assign hook and can then be flipped from false->true in\nqueryIdWanted()? (I'd suggest a more verbose name for that function\nbtw, something like requestQueryIdGeneration() or so).\n\n(Again, just the 30 second review between meetings, so maybe I'm completely off)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Tue, 11 May 2021 09:43:25 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Tue, May 11, 2021 at 09:43:25AM +0200, Magnus Hagander wrote:\n> \n> 30 second review -- wouldn't it be cleaner to keep a separate boolean\n> telling the backend \"include it or not\", which is set to true/false in\n> the guc assign hook and can then be flipped from false->true in\n> queryIdWanted()? 
(I'd suggest a more verbose name for that function\n> btw, something like requestQueryIdGeneration() or so).\n> \n> (Again, just the 30 second review between meetings, so maybe I'm completely off)\n\nIt surely would, but then that variable would need to be explicitly handled\nas it wouldn't be automatically inherited on Windows and EXEC_BACKEND right?\n\n\n", "msg_date": "Tue, 11 May 2021 15:49:47 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "\n\nOn 2021/05/11 16:35, Julien Rouhaud wrote:\n> Anyway, PFA a patch that implement a [off | on | auto] compute_query_id, and\n> provides a new queryIdWanted() function to let third party plugins inform us\n> that they want a query id if possible.\n\nThanks!\n\n\n> As it was noted somewhere in that thread, that's a hack on top on the GUC\n> machinery, so compute_query_id will display \"on\" rather than \"auto\" (or \"auto\nand enabled\" or whatever) since GUC isn't designed to handle that behavior.\n\nCan't we work around this issue by making queryIdWanted() set another flag like query_id_wanted instead of overwriting compute_query_id? 
If we do this, query id computation is necessary when \"compute_query_id == COMPUTE_QUERY_ID_ON || (compute_query_id == COMPUTE_QUERY_ID_AUTO && query_id_wanted)\".\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 11 May 2021 17:41:53 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Tue, May 11, 2021 at 05:41:53PM +0900, Fujii Masao wrote:\n> \n> On 2021/05/11 16:35, Julien Rouhaud wrote:\n> > Anyway, PFA a patch that implement a [off | on | auto] compute_query_id, and\n> > provides a new queryIdWanted() function to let third party plugins inform us\n> > that they want a query id if possible.\n> \n> Thanks!\n> \n> \n> > As it was noted somewhere in that thread, that's a hack on top on the GUC\n> > machinery, so compute_query_id will display \"on\" rather than \"auto\" (or \"auto\n> > and enabled\" or whatever) since GUC isn't designed to handle that behavior.\n> \n> Can't we work around this issue by making queryIdWanted() set another flag like query_id_wanted instead of overwriting compute_query_id? If we do this, query id computation is necessary when \"compute_query_id == COMPUTE_QUERY_ID_ON || (compute_query_id == COMPUTE_QUERY_ID_AUTO && query_id_wanted)\".\n\nThat's exactly what Magnus mentioned :) It's not possible because variables\naren't inherited on Windows or EXEC_BACKEND. I didn't check but I'm\nassuming that it could work if the other flag was an internal GUC that couldn't\nbe set by users, but then we would have some kind of internal flag that would\nhave to be documented as \"how to check if compute_query_id\" is actually enabled\nor not, which doesn't seem like a good idea.\n\nAnother approach would be to add a new \"auto (enabled)\" option to the enum, and\nprevent users from manually setting the guc to that value. 
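The condition Fujii-san spells out reads as a small predicate; here is a Python sketch, where the enum names follow the patch under discussion and the rest is invented scaffolding:

```python
COMPUTE_QUERY_ID_OFF, COMPUTE_QUERY_ID_ON, COMPUTE_QUERY_ID_AUTO = range(3)

def query_id_needed(compute_query_id: int, query_id_wanted: bool) -> bool:
    # "on" always computes, "off" never does, and "auto" computes only when
    # some module (e.g. pg_stat_statements) has called queryIdWanted().
    return (compute_query_id == COMPUTE_QUERY_ID_ON
            or (compute_query_id == COMPUTE_QUERY_ID_AUTO and query_id_wanted))

assert query_id_needed(COMPUTE_QUERY_ID_ON, False)
assert query_id_needed(COMPUTE_QUERY_ID_AUTO, True)
assert not query_id_needed(COMPUTE_QUERY_ID_AUTO, False)
assert not query_id_needed(COMPUTE_QUERY_ID_OFF, True)
```

The separate `query_id_wanted` flag keeps the user-visible GUC value intact, which is the introspection property the hack on top of the GUC machinery loses.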
It's not perfect\nbut maybe it would be cleaner.\n\nOverall it seems that we don't have a clear consensus on how exactly to address\nthe problem, which is why I originally didn't sent a patch.\n\n\n", "msg_date": "Tue, 11 May 2021 16:52:19 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Tue, May 11, 2021 at 10:51 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Tue, May 11, 2021 at 05:41:53PM +0900, Fujii Masao wrote:\n> >\n> > On 2021/05/11 16:35, Julien Rouhaud wrote:\n> > > Anyway, PFA a patch that implement a [off | on | auto] compute_query_id, and\n> > > provides a new queryIdWanted() function to let third party plugins inform us\n> > > that they want a query id if possible.\n> >\n> > Thanks!\n> >\n> >\n> > > As it was noted somewhere in that thread, that's a hack on top on the GUC\n> > > machinery, so compute_query_id will display \"on\" rather than \"auto\" (or \"auto\n> > > and enabled\" or whatever) since GUC isn't designed to handle that behavior.\n> >\n> > Can't we work around this issue by making queryIdWanted() set another flag like query_id_wanted instead of overwriting compute_query_id? If we do this, query id computation is necessary when \"compute_query_id == COMPUTE_QUERY_ID_ON || (compute_query_id == COMPUTE_QUERY_ID_AUTO && query_id_wanted)\".\n>\n> That's exactly what Magnus mentioned :) It's not possible because variable\n> aren't inherited on Windows or EXEC_BACKEND. I didn't check but I'm\n> assuming that it could work if the other flag was an internal GUC that couldn't\n> be set by users, but then we would have some kind of internal flag that would\n> have to be documented as \"how to check if compute_query_id\" is actually enabled\n> or not, which doesn't seem like a good idea.\n\nThat doesn't fundamentally make it impossible, you just have to add it\nto the list of variables being copied over, wouldn't you? 
See\nsave_backend_variables()\n\n--\n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Tue, 11 May 2021 10:59:51 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Tue, May 11, 2021 at 10:59:51AM +0200, Magnus Hagander wrote:\n> \n> That doesn't fundamentally make it impossible, you just have to add it\n> to the list of variables being copied over, wouldn't you? See\n> save_backend_variables()\n\nYes, I agree, and that's what I meant by \"explicitly handled\". The thing is\nthat I don't know if that's the best way to go, as it doesn't solve the \"is it\nactually enabled\" and/or \"which implementation is used\". At least the patch I\nsent, although it's totally a hack, let you know if compute_query_id is enabled\nor not. I'm fine with implementing it that way, but only if there's a\nconsensus.\n\n\n", "msg_date": "Tue, 11 May 2021 17:41:06 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Tue, May 11, 2021 at 05:41:06PM +0800, Julien Rouhaud wrote:\n> On Tue, May 11, 2021 at 10:59:51AM +0200, Magnus Hagander wrote:\n> > \n> > That doesn't fundamentally make it impossible, you just have to add it\n> > to the list of variables being copied over, wouldn't you? See\n> > save_backend_variables()\n> \n> Yes, I agree, and that's what I meant by \"explicitly handled\". The thing is\n> that I don't know if that's the best way to go, as it doesn't solve the \"is it\n> actually enabled\" and/or \"which implementation is used\". At least the patch I\n> sent, although it's totally a hack, let you know if compute_query_id is enabled\n> or not. I'm fine with implementing it that way, but only if there's a\n> consensus.\n\nActually, isn't that how e.g. wal_buffers = -1 is working? 
The original value\nis lost and what you get is the computed value. This is a bit different here\nas the value isn't always changed, and can be changed interactively but\notherwise it's the same behavior.\n\n\n", "msg_date": "Tue, 11 May 2021 18:52:49 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "At Tue, 11 May 2021 18:52:49 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> On Tue, May 11, 2021 at 05:41:06PM +0800, Julien Rouhaud wrote:\n> > On Tue, May 11, 2021 at 10:59:51AM +0200, Magnus Hagander wrote:\n> > > \n> > > That doesn't fundamentally make it impossible, you just have to add it\n> > > to the list of variables being copied over, wouldn't you? See\n> > > save_backend_variables()\n> > \n> > Yes, I agree, and that's what I meant by \"explicitly handled\". The thing is\n> > that I don't know if that's the best way to go, as it doesn't solve the \"is it\n> > actually enabled\" and/or \"which implementation is used\". At least the patch I\n> > sent, although it's totally a hack, let you know if compute_query_id is enabled\n> > or not. I'm fine with implementing it that way, but only if there's a\n> > consensus.\n> \n> Actually, isn't that how e.g. wal_buffers = -1 is working? The original value\n> is lost and what you get is the computed value. This is a bit different here\n> as the value isn't always changed, and can be changed interactively but\n> otherwise it's the same behavior.\n\nIf we look it in pg_settings, it shows the current value and the value\nat boot-time. So I'm fine with that behavior.\n\nHowever, IMHO, I doubt the necessity of \"on\". Assuming that we require\nany module that wants query-id to call queryIdWanted() at any time\nafter each process startup (or it could be inherited to children), I\nthink only \"auto\" and \"off\" are enough for the variable. 
Thinking in\nthis line, the variable is a subset of a GUC variable to specify the\nname of a query-id provider (as Andres suggested upthread), and I\nthink it would work better in future.\n\nSo for now I propose that we have a variable query_id_provider that\nhas only 'default' and 'none' as the domain. We can later expand it\nso that any other query-id provider modules can be loaded without\nchaning the interface.\n\npostgresql.conf\n# query_id_provider = 'default' # provider module for query-id. 'none' means\n# # disabling query-id calculation.\n\nIf we want to have a direct way to know whether query-id is active or\nnot, it'd be good to have a read-only variable \"query_id_active\".\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n \n\n\n", "msg_date": "Wed, 12 May 2021 11:08:36 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "Hello Horiguchi-san,\n\nOn Wed, May 12, 2021 at 11:08:36AM +0900, Kyotaro Horiguchi wrote:\n> \n> If we look it in pg_settings, it shows the current value and the value\n> at boot-time. So I'm fine with that behavior.\n> \n> However, IMHO, I doubt the necessity of \"on\". Assuming that we require\n> any module that wants query-id to call queryIdWanted() at any time\n> after each process startup (or it could be inherited to children), I\n> think only \"auto\" and \"off\" are enough for the variable.\n\nI don't think that this approach would cope well for people who want a queryid\nwithout pg_stat_statements or such. 
Since the queryid can now be found in\npg_stat_activity, EXPLAIN output or the logs I think it's entirely reasonable\nto allow users to benefit from that even if they don't install additional\nmodule.\n\n> Thinking in\n> this line, the variable is a subset of a GUC variable to specify the\n> name of a query-id provider (as Andres suggested upthread), and I\n> think it would work better in future.\n> \n> So for now I propose that we have a variable query_id_provider that\n> has only 'default' and 'none' as the domain.\n\nI think this would be a mistake to do that, as it would mean that we don't\nofficially support alternative queryid provider.\n\n> We can later expand it\n> so that any other query-id provider modules can be loaded without\n> chaning the interface.\n\nThe GUC itself may not change, but third-party queryid provider would probably\nneed changes as the new entry point will be dedicated to compute a queryid\nonly, while third-party plugins may do more than that in their\npost_parse_analyze_hook. And also users will have to change their\nconfiguration to use that new interface, and additionally the module may now\nhave to be removed from shared_preload_libraries. Overall, it doesn't seem to\nme that it would make users' life easier.\n\n\n", "msg_date": "Wed, 12 May 2021 10:42:01 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "At Wed, 12 May 2021 10:42:01 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> Hello Horiguchi-san,\n> \n> On Wed, May 12, 2021 at 11:08:36AM +0900, Kyotaro Horiguchi wrote:\n> > \n> > If we look it in pg_settings, it shows the current value and the value\n> > at boot-time. So I'm fine with that behavior.\n> > \n> > However, IMHO, I doubt the necessity of \"on\". 
Assuming that we require\n> > any module that wants query-id to call queryIdWanted() at any time\n> > after each process startup (or it could be inherited to children), I\n> > think only \"auto\" and \"off\" are enough for the variable.\n> \n> I don't think that this approach would cope well for people who want a queryid\n> without pg_stat_statements or such. Since the queryid can now be found in\n> pg_stat_activity, EXPLAIN output or the logs I think it's entirely reasonable\n> to allow users to benefit from that even if they don't install additional\n> module.\n\nAh, I missed that case. And we are wanting to use pg_stat_statements\nwith (almost) zero-config? How about the following behavior?\n\nSetting query_id_provider to 'none' means we don't calculate query-id\nby default. However, if queryIdWante() is called, the default provider\nis set up and starts calculating query id.\n\nSetting query_id_provider to something else means the user wants\nquery-id calcualted using the provider. Setting 'default' is\nequivalent to setting compute_query_id to 'on'.\n\nThere might be a case where a user sets query_id_provider to\nnon-'none' but don't want have query-id calculated, but it can be said\na kind of mis-configuration?\n\n> > Thinking in\n> > this line, the variable is a subset of a GUC variable to specify the\n> > name of a query-id provider (as Andres suggested upthread), and I\n> > think it would work better in future.\n> > \n> > So for now I propose that we have a variable query_id_provider that\n> > has only 'default' and 'none' as the domain.\n> \n> I think this would be a mistake to do that, as it would mean that we don't\n> officially support alternative queryid provider.\n\nOk, if we want to support alternative providers from the first, we\nneed to actually write the loader code for query-id providers. 
It\nwould not be so hard?, but it might not be suitable to this stage so I\nproposed that to get rid of needing such complexity for now.\n\n(Anyway I prefer to load query-id provider as a dynamically loadable\n module rather than hook-function.)\n\n> > We can later expand it\n> > so that any other query-id provider modules can be loaded without\n> > chaning the interface.\n> \n> The GUC itself may not change, but third-party queryid provider would probably\n> need changes as the new entry point will be dedicated to compute a queryid\n> only, while third-party plugins may do more than that in their\n> post_parse_analyze_hook. And also users will have to change their\n\nI don't think it is not that a problem. Even if any third-party\nextension is having query-id generator by itself, in most cases it\nwould be a copy of JumbleQuery in case of pg_stat_statement is not\nloaded and now it is moved in-core as 'default' provider. What the\nexntension needs to be done is just ripping out the copied generator\ncode. I guess...\n\n> configuration to use that new interface, and additionally the module may now\n> have to be removed from shared_preload_libraries. Overall, it doesn't seem to\n> me that it would make users' life easier.\n\nWhy the third-party module need to be removed from\nshared_preload_libraries? The module can stay as a preloaded shared\nlibrary but just no longer need to have its own query-id provider\nsince it is provided in-core. If the extension required a specific\nprovider, the developer need to make it a loadable module and users\nneed to specify the provider module explicitly. 
I don't think that is\nnot a problem but if we wanted to make it easier, we can let users\nfree from that step by allowing 'auto' for query-id-provider to load\nany module by the first extension.\n\nSo, for example, how about the following interface?\n\nGUC query_id_provider:\n\n- 'none' : query_id is not calculated, don't allow loading external\n generator module.\n\n- 'default' : use default provider and calculate query-id.\n\n- '<provider-name>' : use the provider and calculate query-id using it.\n\n- 'auto' : query_id is not calculated, but allow to load query-id\n provider if queryIdWanted() is called.\n\n# of course 'auto' and 'default' are inhibited as the provier name.\n\n- core function bool queryIdWanted(char *provider_name, bool use_existing)\n\n Allows extensions to request to load a provider if not yet, then\n start calculating query-id. Returns true if the request is accepted.\n\n provider_name :\n\n - 'default' or '<provider-name>': requests the provider to be loaded\n and start calculating query-id. Refuse the request if 'none' is\n set to query_id_provider.\n\n use_existing: Set true to allow using a provider already loaded.\n Otherwise refuses the request if any other provider than\n prvoder_name is already loaded.\n\nIn most cases users set query_id_provider to 'auto' and extensions\ncall queryIdWanted with ('default', true).\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 12 May 2021 14:33:35 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "At Wed, 12 May 2021 14:33:35 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Ok, if we want to support alternative providers from the first, we\n> need to actually write the loader code for query-id providers. 
It\n> would not be so hard?, but it might not be suitable to this stage so I\n> proposed that to get rid of needing such complexity for now.\n> \n> (Anyway I prefer to load query-id provider as a dynamically loadable\n> module rather than hook-function.)\n...\n> So, for example, how about the following interface?\n> \n> GUC query_id_provider:\n> \n> - 'none' : query_id is not calculated, don't allow loading external\n> generator module.\n> \n> - 'default' : use default provider and calculate query-id.\n> \n> - '<provider-name>' : use the provider and calculate query-id using it.\n> \n> - 'auto' : query_id is not calculated, but allow to load query-id\n> provider if queryIdWanted() is called.\n> \n> # of course 'auto' and 'default' are inhibited as the provier name.\n> \n> - core function bool queryIdWanted(char *provider_name, bool use_existing)\n> \n> Allows extensions to request to load a provider if not yet, then\n> start calculating query-id. Returns true if the request is accepted.\n> \n> provider_name :\n> \n> - 'default' or '<provider-name>': requests the provider to be loaded\n> and start calculating query-id. Refuse the request if 'none' is\n> set to query_id_provider.\n> \n> use_existing: Set true to allow using a provider already loaded.\n> Otherwise refuses the request if any other provider than\n> prvoder_name is already loaded.\n> \n> In most cases users set query_id_provider to 'auto' and extensions\n> call queryIdWanted with ('default', true).\n\nHmm. They are in contradiction. 
Based on this future picture, at this\nstage it can be simplified to allowing only 'default' as the provider\nname.\n\nIf you want to support any other provider at this point,,, we need to\nimlement the full-spec?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 12 May 2021 14:44:47 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "Hi\n\n\n> Ah, I missed that case. And we are wanting to use pg_stat_statements\n> with (almost) zero-config? How about the following behavior?\n>\n>\nUntil now, the pg_stat_statements was zero-config. So the change is not\nuser friendly.\n\nThe idea so pg_stat_statements requires enabled computed_query_id is not\ngood. There should be dependency only on the queryid column.\n\nRegards\n\nPavel\n", "msg_date": "Wed, 12 May 2021 07:49:13 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Wed, May 12, 2021 at 02:33:35PM +0900, Kyotaro Horiguchi wrote:\n> At Wed, 12 May 2021 10:42:01 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> > \n> > I don't think that this approach would cope well for people who want a queryid\n> > without pg_stat_statements or such. 
Since the queryid can now be found in\n> > pg_stat_activity, EXPLAIN output or the logs I think it's entirely reasonable\n> > to allow users to benefit from that even if they don't install additional\n> > module.\n> \n> Ah, I missed that case. And we are wanting to use pg_stat_statements\n> with (almost) zero-config? How about the following behavior?\n> \n> Setting query_id_provider to 'none' means we don't calculate query-id\n> by default. However, if queryIdWante() is called, the default provider\n> is set up and starts calculating query id.\n\nHaving \"none\" meant \"not unless someone asks for it\" looks like a POLA\nviolation.\n\n> Setting query_id_provider to something else means the user wants\n> query-id calcualted using the provider. Setting 'default' is\n> equivalent to setting compute_query_id to 'on'.\n> \n> There might be a case where a user sets query_id_provider to\n> non-'none' but don't want have query-id calculated, but it can be said\n> a kind of mis-configuration?\n\nSo if I'm understanding correctly, you're arguing for an approach different to\nwhat Michael stated as the general consensus in [1]. I'm not saying that I\nthink it's a bad idea (and I actually suggested it before), but we have to\nchose an approach and stick with it.\n\n> > I think this would be a mistake to do that, as it would mean that we don't\n> > officially support alternative queryid provider.\n> \n> Ok, if we want to support alternative providers from the first, we\n> need to actually write the loader code for query-id providers. 
It\n> would not be so hard?, but it might not be suitable to this stage so I\n> proposed that to get rid of needing such complexity for now.\n\nI did write a POC extension [2] to demonstrate that moving pg_stat_statement's\nqueryid calculation in core doesn't mean that we're imposing it to everyone.\nAnd yes this is critical and a must have in the initial implementation.\n\n> (Anyway I prefer to load query-id provider as a dynamically loadable\n> module rather than hook-function.)\n\nI agree that having a specific API (I'm fine with a hook or a dynamically\nloaded function) for that would be better, but it doesn't appear to be the\nopinion of the majority.\n\n> > The GUC itself may not change, but third-party queryid provider would probably\n> > need changes as the new entry point will be dedicated to compute a queryid\n> > only, while third-party plugins may do more than that in their\n> > post_parse_analyze_hook. And also users will have to change their\n> \n> I don't think it is not that a problem.\n\nDid you mean \"I don't think that it's a problem\"? Otherwise I don't get it.\n\n> Even if any third-party\n> extension is having query-id generator by itself, in most cases it\n> would be a copy of JumbleQuery in case of pg_stat_statement is not\n> loaded and now it is moved in-core as 'default' provider. What the\n> exntension needs to be done is just ripping out the copied generator\n> code. I guess...\n\nI don't fully understand, but it seems that you're arguing that the only use\ncase is to have something similar to pg_stat_statements (say e.g.\npg_store_plans), that always have the same queryid implementation as\npg_stat_statements. That's not the case, as there already are \"clones\" of\npg_stat_statements, and the main difference is an alternative queryid\nimplementation. 
So in that case what author would do is to drop everything\n*except* the queryid implementation.\n\nAnd if I'm not mistaken, pg_store_plans also wants a different queryid\nimplementation, but has to handle a secondary queryid on top of it\n(https://github.com/ossc-db/pg_store_plans/blob/master/pg_store_plans.c#L843-L855).\nSo here again what the extension want is to get rid of pg_stat_statements (and\nnow core) queryid implementation.\n\n> > configuration to use that new interface, and additionally the module may now\n> > have to be removed from shared_preload_libraries. Overall, it doesn't seem to\n> > me that it would make users' life easier.\n> \n> Why the third-party module need to be removed from\n> shared_preload_libraries? The module can stay as a preloaded shared\n> library but just no longer need to have its own query-id provider\n> since it is provided in-core. If the extension required a specific\n> provider, the developer need to make it a loadable module and users\n> need to specify the provider module explicitly.\n\nIt's the same misunderstanding here. Basically people want to benefit from the\nwhole ecosystem based on a queryid (pg_stat_statements, now\npg_stat_activity.query_id and such) but with another definition of what a\nqueryid is. So those people will now only need to implement something like\n[2], rather than forking every single extension they want to use.\n\n\n[1]: https://www.postgresql.org/message-id/YJoeXcrwe1EDmqKT@paquier.xyz\n[2]: https://github.com/rjuju/pg_queryid\n\n\n", "msg_date": "Wed, 12 May 2021 14:05:16 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Wed, May 12, 2021 at 07:49:13AM +0200, Pavel Stehule wrote:\n> \n> > Ah, I missed that case. And we are wanting to use pg_stat_statements\n> > with (almost) zero-config? 
How about the following behavior?\n> >\n> >\n> Until now, the pg_stat_statements was zero-config. So the change is not\n> user friendly.\n\nApart from configuring shared_preload_libraries, but agreed.\n\n> The idea so pg_stat_statements requires enabled computed_query_id is not\n> good. There should be dependency only on the queryid column.\n\nI agree that requiring to change compute_query_id when you already added\npg_stat_statements in shared_preload_libraries isn't good, and the patch I sent\nyesterday would fix that.\n\n\n", "msg_date": "Wed, 12 May 2021 14:10:49 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "st 12. 5. 2021 v 8:10 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> On Wed, May 12, 2021 at 07:49:13AM +0200, Pavel Stehule wrote:\n> >\n> > > Ah, I missed that case. And we are wanting to use pg_stat_statements\n> > > with (almost) zero-config? How about the following behavior?\n> > >\n> > >\n> > Until now, the pg_stat_statements was zero-config. So the change is not\n> > user friendly.\n>\n> Apart from configuring shared_preload_libraries, but agreed.\n>\n> > The idea so pg_stat_statements requires enabled computed_query_id is not\n> > good. There should be dependency only on the queryid column.\n>\n> I agree that requiring to change compute_query_id when you already added\n> pg_stat_statements in shared_preload_libraries isn't good, and the patch I\n> sent\n> yesterday would fix that.\n>\n\nI don't like the idea of implicit force enabling any feature flag, but it\nis better than current design. But it doesn't look like a robust solution.\n\nDoes it mean that if somebody disables computed_query_id, then\npg_stat_statements will not work?\n\nWhy is there the strong dependency between computed_query_id and\npg_stat_statements? Can this dependency be just optional?\n\nRegards\n\nPavel\n", "msg_date": "Wed, 12 May 2021 08:58:45 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Wed, May 12, 2021 at 08:58:45AM +0200, Pavel Stehule wrote:\n> \n> I don't like the idea of implicit force enabling any feature flag, but it\n> is better than current design. But it doesn't look like a robust solution.\n> \n> Does it mean that if somebody disables computed_query_id, then\n> pg_stat_statements will not work?\n\nIt depends, but if you mean \"setting up pg_stat_statements, intentionally\ndisabling in-core queryid calculation and not configuring an alternative\nsource\" then yes pg_stat_statements will not work. 
But I don't see any\ndifference from \"someone reduce wal_level and complain that replication does\nnot work\" or \"someone disable fsync and complain that data got corrupted\". We\nprovide a sensible default configuration, you can mess it up if you don't know\nwhat you're doing.\n\n> Why is there the strong dependency between computed_query_id and\n> pg_stat_statements? Can this dependency be just optional?\n\nOnce again no, as it otherwise would mean that postgres unilaterally decides\nthat pg_stat_statements' approach to compute a query identifier is the one and\nonly ultimate truth and nothing else could be useful for anyone.\n\n\n", "msg_date": "Wed, 12 May 2021 15:13:39 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "st 12. 5. 2021 v 9:13 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> On Wed, May 12, 2021 at 08:58:45AM +0200, Pavel Stehule wrote:\n> >\n> > I don't like the idea of implicit force enabling any feature flag, but it\n> > is better than current design. But it doesn't look like a robust\n> solution.\n> >\n> > Does it mean that if somebody disables computed_query_id, then\n> > pg_stat_statements will not work?\n>\n> It depends, but if you mean \"setting up pg_stat_statements, intentionally\n> disabling in-core queryid calculation and not configuring an alternative\n> source\" then yes pg_stat_statements will not work. But I don't see any\n> difference from \"someone reduce wal_level and complain that replication\n> does\n> not work\" or \"someone disable fsync and complain that data got\n> corrupted\". We\n> provide a sensible default configuration, you can mess it up if you don't\n> know\n> what you're doing.\n>\n\n> > Why is there the strong dependency between computed_query_id and\n> > pg_stat_statements? 
Can this dependency be just optional?\n>\n> Once again no, as it otherwise would mean that postgres unilaterally\n> decides\n> that pg_stat_statements' approach to compute a query identifier is the one\n> and\n> only ultimate truth and nothing else could be useful for anyone.\n>\n\nok. Understand.\n\nIf I understand well, then computed_query_id does not make sense for\npg_stat_statemenst, because this extension always requires it.\n\nCannot be better to use queryid inside pg_stat_statements every time\nwithout dependency on computed_query_id? And computed_query_id can be used\nonly for EXPLAIN and for pg_stat_activity.\n\npg_stat_statements cannot work without a queryid, so is useless to speak\nabout configuration. If you use pg_stat_statements, then the queryid will\nbe computed every time, but the visibility will be only for\npg_stat_statements.\n\nOr a different strategy. I understand so computed_query_id should be\nactive. But I dislike the empty result of pg_stat_statements when\ncomputed_query_id is off. Is it possible to raise an exception instead of\nshowing an empty result?\n\nThe most correct fix from my perspective is just check in function\npg_stat_statements if query id is computed or not. If not, and there is no\ndata, then raise an exception with the hint \"enable compute_query_id\". When\nthere is data, then show a warning with the mentioned hint and show data.\n\nWhat do you think about it?\n\nPavel\n\nst 12. 5. 2021 v 9:13 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:On Wed, May 12, 2021 at 08:58:45AM +0200, Pavel Stehule wrote:\n> \n> I don't like the idea of implicit force enabling any feature flag, but it\n> is better than current design. 
But it doesn't look like a robust solution.\n> \n> Does it mean that if somebody disables computed_query_id, then\n> pg_stat_statements will not work?\n\nIt depends, but if you mean \"setting up pg_stat_statements, intentionally\ndisabling in-core queryid calculation and not configuring an alternative\nsource\" then yes pg_stat_statements will not work.  But I don't see any\ndifference from \"someone reduce wal_level and complain that replication does\nnot work\" or \"someone disable fsync and complain that data got corrupted\".  We\nprovide a sensible default configuration, you can mess it up if you don't know\nwhat you're doing. \n\n> Why is there the strong dependency between computed_query_id and\n> pg_stat_statements? Can this dependency be just optional?\n\nOnce again no, as it otherwise would mean that postgres unilaterally decides\nthat pg_stat_statements' approach to compute a query identifier is the one and\nonly ultimate truth and nothing else could be useful for anyone.ok. Understand.If I understand well, then computed_query_id does not make sense for pg_stat_statemenst, because this extension always requires it. Cannot be better to use queryid inside pg_stat_statements every time without dependency on computed_query_id? And computed_query_id can be used only for EXPLAIN and for pg_stat_activity.pg_stat_statements cannot work without a queryid, so is useless to speak about configuration. If you use pg_stat_statements, then the queryid will be computed every time, but the visibility will be only for pg_stat_statements. Or a different strategy. I understand so computed_query_id should be active. But I dislike the empty result of pg_stat_statements when computed_query_id is off. Is it possible to raise an exception instead of showing an empty result? The most correct fix from my perspective is just check in function pg_stat_statements if query id is computed or not. 
If not, and there is no data, then raise an exception with the hint \"enable compute_query_id\". When there is data, then show a warning with the mentioned hint and show data.What do you think about it?Pavel", "msg_date": "Wed, 12 May 2021 09:51:26 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Wed, May 12, 2021 at 09:51:26AM +0200, Pavel Stehule wrote:\n> \n> If I understand well, then computed_query_id does not make sense for\n> pg_stat_statemenst, because this extension always requires it.\n\nNo, pg_stat_statements requires *a* queryid, not specifially *our* queryid.\n\n> Cannot be better to use queryid inside pg_stat_statements every time\n> without dependency on computed_query_id? And computed_query_id can be used\n> only for EXPLAIN and for pg_stat_activity.\n\nNo, because then you will have a discrepancy between those two. And if you\nwant a different queryid approach (say based on object names rather than oid so\nit survives logical replication), then you also want that queryid used for\npg_stat_statements. And that what happen is that you have to fork\npg_stat_statements to only change the queryid implementation, which is one of\nthe thing that the patch to move the implementation to core solves.\n\n> pg_stat_statements cannot work without a queryid, so is useless to speak\n> about configuration. If you use pg_stat_statements, then the queryid will\n> be computed every time, but the visibility will be only for\n> pg_stat_statements.\n\nYes, pg_stat_statements cannot work without a queryid, but it CAN work without\ncore queryid.\n\n> Or a different strategy. I understand so computed_query_id should be\n> active. But I dislike the empty result of pg_stat_statements when\n> computed_query_id is off. Is it possible to raise an exception instead of\n> showing an empty result?\n\nYes, but I don't think that it's a good idea. 
For instance pg_stat_statements\nwill behave poorly if you have to regularly evict entries. For instance: any\nquery touching a temporary table. One way to avoid that is to avoid storing\nentries that you know are very likely to be eventually evicted.\n\nSo to fix this problem, you have 2 ways to go:\n\n1) fix your app and explicitly disable/enable pg_stat_statements around all\n those queries, and hope you won't miss any\n\n2) write your own queryid implementation to not generate a queryid in such case.\n\n2 seems like a reasonable scenario, and if you force pg_stat_statements to\nerror out in that case then you would be forced to use approach 1.\n\n\n", "msg_date": "Wed, 12 May 2021 16:14:45 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "st 12. 5. 2021 v 10:14 odesílatel Julien Rouhaud <rjuju123@gmail.com>\nnapsal:\n\n> On Wed, May 12, 2021 at 09:51:26AM +0200, Pavel Stehule wrote:\n> >\n> > If I understand well, then compute_query_id does not make sense for\n> > pg_stat_statements, because this extension always requires it.\n>\n> No, pg_stat_statements requires *a* queryid, not specifically *our* queryid.\n>\n> > Cannot it be better to use queryid inside pg_stat_statements every time\n> > without dependency on compute_query_id? And compute_query_id can be\n> used\n> > only for EXPLAIN and for pg_stat_activity.\n>\n> No, because then you will have a discrepancy between those two. And if you\n> want a different queryid approach (say based on object names rather than\n> oid so\n> it survives logical replication), then you also want that queryid used for\n> pg_stat_statements. 
And that what happen is that you have to fork\n> pg_stat_statements to only change the queryid implementation, which is one\n> of\n> the thing that the patch to move the implementation to core solves.\n>\n> > pg_stat_statements cannot work without a queryid, so is useless to speak\n> > about configuration. If you use pg_stat_statements, then the queryid will\n> > be computed every time, but the visibility will be only for\n> > pg_stat_statements.\n>\n> Yes, pg_stat_statements cannot work without a queryid, but it CAN work\n> without\n> core queryid.\n>\n\n\n\n>\n> > Or a different strategy. I understand so computed_query_id should be\n> > active. But I dislike the empty result of pg_stat_statements when\n> > computed_query_id is off. Is it possible to raise an exception instead of\n> > showing an empty result?\n>\n> Yes, but I don't think that it's a good idea. For instance\n> pg_stat_statements\n> will behave poorly if you have to regularly evict entry. For instance: any\n> query touching a temporary table. One way to avoid that it to avoid\n> storing\n> entries that you know are very likely to be eventually evicted.\n>\n> So to fix this problem, you have 2 ways to go:\n>\n> 1) fix your app and explicitly disable/enable pg_stat_statements around all\n> those queries, and hope you won't miss any\n>\n> 2) write your own queryid implementation to not generate a queryid in such\n> case.\n>\n> 2 seems like a reasonable scenario, and if you force pg_stat_statements to\n> error out in that case then you would be forced to use approach 1.\n>\n\nMy second proposal can work for your example too. pg_stat_statements have\nto require any active queryid computing. And when it is not available, then\nthe exception should be raised.\n\nThe custom queryid can return null, and still the queryid will be computed.\nMaybe the warning can be enough. 
Just, if somebody use pg_stat_statements\nfunction, then enforce the check if queryid is computed (compute_query_id\nis true || some hook is not null), and if not then raise a warning.", "msg_date": "Wed, 12 May 2021 10:57:25 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Wed, May 12, 2021 at 10:57:25AM +0200, Pavel Stehule wrote:\n> \n> My second proposal can work for your example too. pg_stat_statements have\n> to require any active queryid computing. And when it is not available, then\n> the exception should be raised.\n> \n> The custom queryid can return null, and still the queryid will be computed.\n> Maybe the warning can be enough. Just, if somebody use pg_stat_statements\n> function, then enforce the check if queryid is computed (compute_query_id\n> is true || some hook is not null), and if not then raise a warning.\n\nAh I'm sorry I misunderstood your proposal.  
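[Practical aside added in editing, not from the original mails: the "explicitly disable/enable pg_stat_statements around those queries" approach mentioned earlier does not require unloading the module — it can be done per session with the extension's own track setting. A sketch:]

```sql
-- Sketch, assuming pg_stat_statements is in shared_preload_libraries
-- (PostgreSQL 14): suspend tracking around statements you don't want
-- stored, then restore the default.
SET pg_stat_statements.track = 'none';
CREATE TEMP TABLE scratch(i int);
SELECT count(*) FROM scratch;   -- not recorded while track = 'none'
DROP TABLE scratch;
SET pg_stat_statements.track = 'top';
```

As the mail notes, this still relies on the application not missing any such query, which is why a custom queryid implementation can be the more robust option.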
Yes, definitely adding a warning\nor an error when executing pg_stat_statements() SRF would help, that's a great\nidea!\n\nI'll wait a bit in case someone has any objection, and if not send an updated\npatch!\n\n\n", "msg_date": "Wed, 12 May 2021 17:30:26 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "At Wed, 12 May 2021 14:05:16 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> On Wed, May 12, 2021 at 02:33:35PM +0900, Kyotaro Horiguchi wrote:\n> > At Wed, 12 May 2021 10:42:01 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> > > \n> > > I don't think that this approach would cope well for people who want a queryid\n> > > without pg_stat_statements or such. Since the queryid can now be found in\n> > > pg_stat_activity, EXPLAIN output or the logs I think it's entirely reasonable\n> > > to allow users to benefit from that even if they don't install additional\n> > > module.\n> > \n> > Ah, I missed that case. And we are wanting to use pg_stat_statements\n> > with (almost) zero-config? How about the following behavior?\n> > \n> > Setting query_id_provider to 'none' means we don't calculate query-id\n> > by default. However, if queryIdWante() is called, the default provider\n> > is set up and starts calculating query id.\n> \n> Having \"none\" meant \"not unless someone asks for it\" looks like a POLA\n> violation.\n\nSorry for confusion. A different behavior for \"none\" is proposed later\nin the mail. It is just an intermediate discussion.\n\n> > Setting query_id_provider to something else means the user wants\n> > query-id calcualted using the provider. 
Setting 'default' is\n> > equivalent to setting compute_query_id to 'on'.\n> > \n> > There might be a case where a user sets query_id_provider to\n> > non-'none' but don't want have query-id calculated, but it can be said\n> > a kind of mis-configuration?\n> \n> So if I'm understanding correctly, you're arguing for an approach different to\n> what Michael stated as the general consensus in [1]. I'm not saying that I\n> think it's a bad idea (and I actually suggested it before), but we have to\n> chose an approach and stick with it.\n\nI'm not sure how much room for change of the direction is left. So it\nwas just a proposal. So if the majority still thinks that it is the\nway to stick to controling on/off/(auto) the in-core implement and\nseparately allow another module to be hooked, I don't further object\nto that decision.\n\n> > > I think this would be a mistake to do that, as it would mean that we don't\n> > > officially support alternative queryid provider.\n> > \n> > Ok, if we want to support alternative providers from the first, we\n> > need to actually write the loader code for query-id providers. It\n> > would not be so hard?, but it might not be suitable to this stage so I\n> > proposed that to get rid of needing such complexity for now.\n> \n> I did write a POC extension [2] to demonstrate that moving pg_stat_statement's\n> queryid calculation in core doesn't mean that we're imposing it to everyone.\n> And yes this is critical and a must have in the initial implementation.\n\nOk, understood.\n\n> > (Anyway I prefer to load query-id provider as a dynamically loadable\n> > module rather than hook-function.)\n> \n> I agree that having a specific API (I'm fine with a hook or a dynamically\n> loaded function) for that would be better, but it doesn't appear to be the\n> opinion of the majority.\n\nUgg. 
Ok.\n\n> > > The GUC itself may not change, but third-party queryid provider would probably\n> > > need changes as the new entry point will be dedicated to compute a queryid\n> > > only, while third-party plugins may do more than that in their\n> > > post_parse_analyze_hook. And also users will have to change their\n> > \n> > I don't think it is not that a problem.\n> \n> Did you mean \"I don't think that it's a problem\"? Otherwise I don't get it.\n\nYes, you're right. Sorry for the typo.\n\n> > Even if any third-party\n> > extension is having query-id generator by itself, in most cases it\n> > would be a copy of JumbleQuery in case of pg_stat_statement is not\n> > loaded and now it is moved in-core as 'default' provider. What the\n> > exntension needs to be done is just ripping out the copied generator\n> > code. I guess...\n> \n> I don't fully understand, but it seems that you're arguing that the only use\n> case is to have something similar to pg_stat_statements (say e.g.\n> pg_store_plans), that always have the same queryid implementation as\n> pg_stat_statements. That's not the case, as there already are \"clones\" of\n> pg_stat_statements, and the main difference is an alternative queryid\n> implementation. So in that case what author would do is to drop everything\n> *except* the queryid implementation.\n> \n> And if I'm not mistaken, pg_store_plans also wants a different queryid\n> implementation, but has to handle a secondary queryid on top of it\n> (https://github.com/ossc-db/pg_store_plans/blob/master/pg_store_plans.c#L843-L855).\n\nYeah, the extension intended to be used joining with the\npg_stat_statements view. And the reason for the second query-id dates\nback to the era when query id was not available in the\npg_stat_statements view. Now it is mere a fall-back query id when\npg_stat_statments is not active. 
Now that the in-core query-id is\navailable, I think there's no reason to keep that implement.\n\n> So here again what the extension want is to get rid of pg_stat_statements (and\n> now core) queryid implementation.\n\nSo the extension might be a good reason for the discussion^^;\n\n> > > configuration to use that new interface, and additionally the module may now\n> > > have to be removed from shared_preload_libraries. Overall, it doesn't seem to\n> > > me that it would make users' life easier.\n> > \n> > Why the third-party module need to be removed from\n> > shared_preload_libraries? The module can stay as a preloaded shared\n> > library but just no longer need to have its own query-id provider\n> > since it is provided in-core. If the extension required a specific\n> > provider, the developer need to make it a loadable module and users\n> > need to specify the provider module explicitly.\n> \n> It's the same misunderstanding here. Basically people want to benefit from the\n> whole ecosystem based on a queryid (pg_stat_statements, now\n> pg_stat_activity.query_id and such) but with another definition of what a\n> queryid is. So those people will now only need to implement something like\n> [2], rather than forking every single extension they want to use.\n\nHmm. I'm not sure the [2] gives sufficient reason for leaving the\ncurrent interface. But will follow if it is sitll the consensus. 
(And\nit seems like true.)\n\n> [1]: https://www.postgresql.org/message-id/YJoeXcrwe1EDmqKT@paquier.xyz\n> [2]: https://github.com/rjuju/pg_queryid\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 12 May 2021 18:37:24 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "At Wed, 12 May 2021 17:30:26 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> On Wed, May 12, 2021 at 10:57:25AM +0200, Pavel Stehule wrote:\n> > \n> > My second proposal can work for your example too. pg_stat_statements have\n> > to require any active queryid computing. And when it is not available, then\n> > the exception should be raised.\n> > \n> > The custom queryid can return null, and still the queryid will be computed.\n> > Maybe the warning can be enough. Just, if somebody use pg_stat_statements\n> > function, then enforce the check if queryid is computed (compute_query_id\n> > is true || some hook is not null), and if not then raise a warning.\n> \n> Ah I'm sorry I misunderstood your proposal. Yes, definitely adding a warning\n> or an error when executing pg_stat_statements() SRF would help, that's a great\n> idea!\n> \n> I'll wait a bit in case someone has any objection, and if not send an updated\n> patch!\n\nIsn't there a case where pg_stat_statements uses an alternative\nquery-id provider?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 12 May 2021 18:39:27 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "st 12. 5. 
2021 v 11:39 odesílatel Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nnapsal:\n\n> At Wed, 12 May 2021 17:30:26 +0800, Julien Rouhaud <rjuju123@gmail.com>\n> wrote in\n> > On Wed, May 12, 2021 at 10:57:25AM +0200, Pavel Stehule wrote:\n> > >\n> > > My second proposal can work for your example too. pg_stat_statements\n> have\n> > > to require any active queryid computing. And when it is not available,\n> then\n> > > the exception should be raised.\n> > >\n> > > The custom queryid can return null, and still the queryid will be\n> computed.\n> > > Maybe the warning can be enough. Just, if somebody use\n> pg_stat_statements\n> > > function, then enforce the check if queryid is computed\n> (compute_query_id\n> > > is true || some hook is not null), and if not then raise a warning.\n> >\n> > Ah I'm sorry I misunderstood your proposal. Yes, definitely adding a\n> warning\n> > or an error when executing pg_stat_statements() SRF would help, that's a\n> great\n> > idea!\n> >\n> > I'll wait a bit in case someone has any objection, and if not send an\n> updated\n> > patch!\n>\n> Isn't there a case where pg_stat_statements uses an alternative\n> query-id provider?\n>\n\nthis check just can check if there is \"any\" query-id provider. In this\ncontext is not important if it is buildin or external\n\n\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n>\n\nst 12. 5. 2021 v 11:39 odesílatel Kyotaro Horiguchi <horikyota.ntt@gmail.com> napsal:At Wed, 12 May 2021 17:30:26 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> On Wed, May 12, 2021 at 10:57:25AM +0200, Pavel Stehule wrote:\n> > \n> > My second proposal can work for your example too. pg_stat_statements have\n> > to require any active queryid computing. And when it is not available, then\n> > the exception should be raised.\n> > \n> > The custom queryid can return null, and still the queryid will be computed.\n> > Maybe the warning can be enough. 
Just, if somebody use pg_stat_statements\n> > function, then enforce the check if queryid is computed (compute_query_id\n> > is true || some hook is not null), and if not then raise a warning.\n> \n> Ah I'm sorry I misunderstood your proposal.  Yes, definitely adding a warning\n> or an error when executing pg_stat_statements() SRF would help, that's a great\n> idea!\n> \n> I'll wait a bit in case someone has any objection, and if not send an updated\n> patch!\n\nIsn't there a case where pg_stat_statements uses an alternative\nquery-id provider?this check just can check if there is \"any\" query-id provider. In this context is not important if it is buildin or external \n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 12 May 2021 11:42:12 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "At Wed, 12 May 2021 18:39:27 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Wed, 12 May 2021 17:30:26 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> > On Wed, May 12, 2021 at 10:57:25AM +0200, Pavel Stehule wrote:\n> > > \n> > > My second proposal can work for your example too. pg_stat_statements have\n> > > to require any active queryid computing. And when it is not available, then\n> > > the exception should be raised.\n> > > \n> > > The custom queryid can return null, and still the queryid will be computed.\n> > > Maybe the warning can be enough. Just, if somebody use pg_stat_statements\n> > > function, then enforce the check if queryid is computed (compute_query_id\n> > > is true || some hook is not null), and if not then raise a warning.\n> > \n> > Ah I'm sorry I misunderstood your proposal. 
Yes, definitely adding a warning\n> > or an error when executing pg_stat_statements() SRF would help, that's a great\n> > idea!\n> > \n> > I'll wait a bit in case someone has any objection, and if not send an updated\n> > patch!\n> \n> Isn't there a case where pg_stat_statements uses an alternative\n> query-id provider?\n\nI don't object that if we allow false non-error when an extension that\nuses the hooks but doesn't compute a query id.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 12 May 2021 18:42:31 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Wed, May 12, 2021 at 11:42:12AM +0200, Pavel Stehule wrote:\n> st 12. 5. 2021 v 11:39 odes�latel Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> napsal:\n> \n> > At Wed, 12 May 2021 17:30:26 +0800, Julien Rouhaud <rjuju123@gmail.com>\n> > wrote in\n> > > On Wed, May 12, 2021 at 10:57:25AM +0200, Pavel Stehule wrote:\n> > > >\n> > > > My second proposal can work for your example too. pg_stat_statements\n> > have\n> > > > to require any active queryid computing. And when it is not available,\n> > then\n> > > > the exception should be raised.\n> > > >\n> > > > The custom queryid can return null, and still the queryid will be\n> > computed.\n> > > > Maybe the warning can be enough. Just, if somebody use\n> > pg_stat_statements\n> > > > function, then enforce the check if queryid is computed\n> > (compute_query_id\n> > > > is true || some hook is not null), and if not then raise a warning.\n> > >\n> > > Ah I'm sorry I misunderstood your proposal. 
Yes, definitely adding a\n> > warning\n> > > or an error when executing pg_stat_statements() SRF would help, that's a\n> > great\n> > > idea!\n> > >\n> > > I'll wait a bit in case someone has any objection, and if not send an\n> > updated\n> > > patch!\n> >\n> > Isn't there a case where pg_stat_statements uses an alternative\n> > query-id provider?\n> >\n> \n> this check just can check if there is \"any\" query-id provider. In this\n> context is not important if it is buildin or external\n\nYes, the idea is that if you execute \"SELECT * FROM pg_stat_statements\" or\nsimilar, then if the executing query itself doesn't have a queryid then\nit's very likely that you didn't configure compute_query_id or an alternative\nquery_id implementation properly. And loudly complaining seems like the right\nthing to do.\n\n\n", "msg_date": "Wed, 12 May 2021 17:51:49 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Wed, May 12, 2021 at 06:37:24PM +0900, Kyotaro Horiguchi wrote:\n> At Wed, 12 May 2021 14:05:16 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> > \n> > And if I'm not mistaken, pg_store_plans also wants a different queryid\n> > implementation, but has to handle a secondary queryid on top of it\n> > (https://github.com/ossc-db/pg_store_plans/blob/master/pg_store_plans.c#L843-L855).\n> \n> Yeah, the extension intended to be used joining with the\n> pg_stat_statements view. And the reason for the second query-id dates\n> back to the era when query id was not available in the\n> pg_stat_statements view. Now it is mere a fall-back query id when\n> pg_stat_statments is not active. 
Now that the in-core query-id is\n> available, I think there's no reason to keep that implement.\n> \n> > So here again what the extension want is to get rid of pg_stat_statements (and\n> > now core) queryid implementation.\n> \n> So the extension might be a good reason for the discussion^^;\n\nIndeed. So IIUC, what pg_store_plans wants is:\n\n- to use its own query_id implementation\n- to be able to be joined to pg_stat_statements\n\nIs that correct?\n\nIf yes, it seems that starting with pg14, it can be easily achieved by:\n\n- documenting to disable compute_query_id\n- eventually error out at execution time if it's enabled\n- don't call queryIdWanted()\n- expose its query_id\n\nIt will then work just fine, and will be more efficient compared to what is\ndone today as only one queryid will be calculated.\n\nDid I miss something?\n\n\n", "msg_date": "Wed, 12 May 2021 18:09:30 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Wed, May 12, 2021 at 05:30:26PM +0800, Julien Rouhaud wrote:\n> On Wed, May 12, 2021 at 10:57:25AM +0200, Pavel Stehule wrote:\n> > \n> > My second proposal can work for your example too. pg_stat_statements have\n> > to require any active queryid computing. And when it is not available, then\n> > the exception should be raised.\n> > \n> > The custom queryid can return null, and still the queryid will be computed.\n> > Maybe the warning can be enough. Just, if somebody use pg_stat_statements\n> > function, then enforce the check if queryid is computed (compute_query_id\n> > is true || some hook is not null), and if not then raise a warning.\n> \n> Ah I'm sorry I misunderstood your proposal. 
Yes, definitely adding a warning\n> or an error when executing pg_stat_statements() SRF would help, that's a great\n> idea!\n> \n> I'll wait a bit in case someone has any objection, and if not send an updated\n> patch!\n\nHearing no complaint, PFA a v2 implementing such a warning. Here's an\nextract from the updated regression tests:\n\n-- Check that pg_stat_statements() will complain if the configuration appears\n-- to be broken.\nSET compute_query_id = off;\nSELECT pg_stat_statements_reset();\n pg_stat_statements_reset \n--------------------------\n \n(1 row)\n\nSELECT count(*) FROM pg_stat_statements;\nWARNING: Query identifier calculation seems to be disabled\nHINT: If you don't want to use a third-party module to compute query identifiers, you may want to enable compute_query_id\n count \n-------\n 0\n(1 row)\n\n\nI'm of course open to suggestions for some better wording.", "msg_date": "Thu, 13 May 2021 08:26:23 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Wed, May 12, 2021 at 05:51:49PM +0800, Julien Rouhaud wrote:\n> On Wed, May 12, 2021 at 11:42:12AM +0200, Pavel Stehule wrote:\n> > this check just can check if there is \"any\" query-id provider. In this\n> > context is not important if it is buildin or external\n> \n> Yes, the idea is that if you execute \"SELECT * FROM pg_stat_statements\" or\n> similar, then if the executing query itself doesn't have a queryid then\n> it's very likely that you didn't configure compute_query_id or an alternative\n> query_id implementation properly. 
And loudly complaining seems like the right\n> thing to do.\n\nI understand the desire to make pg_stat_statements require minimal\nconfiguration, but frankly, if the server-side variable query id API is\nconfusing, I think we have done more harm than good.\n\nThe problem with compute_query_id=auto is that there is no way to know\nif the query id is actually enabled, unless you guess from the installed\nextensions, or we add another variable to report that, and maybe another\nvariable to control the provier, unless we require turning\ncompute_query_id=off if you are using custom query id computation. What\nif it is auto, and pg_stat_statments is installed, and you want to use a\ncustom query id computation --- what happens? As you can see, this is\nall becoming very complicated.\n\nI think we might be just as well to go with compute_query_id=on/off, and\njust complain loudly from CREATE EXTENSION, or in the server logs on\nserver start via shared_preload_libraries, or when querying\npg_stat_statements system view. We simply say to change\ncompute_query_id=on or to provide a custom query id implementation.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Wed, 12 May 2021 20:36:18 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Wed, May 12, 2021 at 08:36:18PM -0400, Bruce Momjian wrote:\n> On Wed, May 12, 2021 at 05:51:49PM +0800, Julien Rouhaud wrote:\n> > On Wed, May 12, 2021 at 11:42:12AM +0200, Pavel Stehule wrote:\n> > > this check just can check if there is \"any\" query-id provider. 
In this\n> > > context is not important if it is buildin or external\n> > \n> > Yes, the idea is that if you execute \"SELECT * FROM pg_stat_statements\" or\n> > similar, then if the executing query itself doesn't have a queryid then\n> > it's very likely that you didn't configure compute_query_id or an alternative\n> > query_id implementation properly. And loudly complaining seems like the right\n> > thing to do.\n> \n> I understand the desire to make pg_stat_statements require minimal\n> configuration, but frankly, if the server-side variable query id API is\n> confusing, I think we have done more harm than good.\n> \n> The problem with compute_query_id=auto is that there is no way to know\n> if the query id is actually enabled, unless you guess from the installed\n> extensions, or we add another variable to report that, and maybe another\n> variable to control the provier, unless we require turning\n> compute_query_id=off if you are using custom query id computation. What\n> if it is auto, and pg_stat_statments is installed, and you want to use a\n> custom query id computation --- what happens? As you can see, this is\n> all becoming very complicated.\n\nWell, as implemented you can get the value of compute_query_id, and if it's\nstill \"auto\" then it's not enabled as calling queryIdWanted() would turn it to\non. I agree that it's not ideal but you have a way to know. We could document\nthat auto means that it's set to auto and no one asked to automatically enabled\nit.\n\nOr you can just do e.g.\n\nSELECT query_id FROM pg_stat_activity WHERE pid = pg_backend_pid();\n\nand see if you have a query_id or not.\n\nIf you want to use third-party modules, they you have to explicitly disable\ncompute_query_id. If you don't, every query execution will raise an error as\nwe documented that third-party modules should error out if they see that a\nquery_id is already generated. 
Such module could also explicitly check that\ncompute_query_id is off and also raise an error if that's not the case.\n\n> I think we might be just as well to go with compute_query_id=on/off, and\n> just complain loudly from CREATE EXTENSION, or in the server logs on\n> server start via shared_preload_libraries, or when querying\n> pg_stat_statements system view. We simply say to change\n> compute_query_id=on or to provide a custom query id implementation.\n\nI'm not opposed to that, but it was already suggested and apparently people\ndidn't like that approach.\n\n\n", "msg_date": "Thu, 13 May 2021 08:52:36 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "At Wed, 12 May 2021 18:09:30 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> On Wed, May 12, 2021 at 06:37:24PM +0900, Kyotaro Horiguchi wrote:\n> > At Wed, 12 May 2021 14:05:16 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> > > \n> > > And if I'm not mistaken, pg_store_plans also wants a different queryid\n> > > implementation, but has to handle a secondary queryid on top of it\n> > > (https://github.com/ossc-db/pg_store_plans/blob/master/pg_store_plans.c#L843-L855).\n> > \n> > Yeah, the extension intended to be used joining with the\n> > pg_stat_statements view. And the reason for the second query-id dates\n> > back to the era when query id was not available in the\n> > pg_stat_statements view. Now it is mere a fall-back query id when\n> > pg_stat_statments is not active. Now that the in-core query-id is\n> > available, I think there's no reason to keep that implement.\n> > \n> > > So here again what the extension want is to get rid of pg_stat_statements (and\n> > > now core) queryid implementation.\n> > \n> > So the extension might be a good reason for the discussion^^;\n> \n> Indeed. So IIUC, what pg_store_plans wants is:\n\nUgg. Very sorry. 
My brain needs more oxygen, or caffeine.. My\nlast sentence was missing a negation. The plugin does not need a\nspecial query-id provider, so the special provider can be removed\nwithout problems if the core provides one.\n\n> - to use its own query_id implementation\n> - to be able to be joined to pg_stat_statements\n> \n> Is that correct?\n\nIt is correct, but a bit short in detail.\n\nThe query_id of its own is provided because pg_stat_statements did not\nexpose query_id. And it has been preserved only for the case where the\nplugin is used without pg_stat_statements activated. Now that the\nin-core query_id is available, the last reason for the special\nprovider has gone.\n\n> If yes, it seems that starting with pg14, it can be easily achieved by:\n\nSo, it would be a bit different.\n\n> - documenting to disable compute_query_id\n\n documenting to *not disable* compute_query_id. That is, set it to on\n or auto.\n\n> - eventually error out at execution time if it's enabled\n\nSo, the extension would check if any query_id provider *is* active.\n\n> - don't call queryIdWanted()\n> - expose its query_id\n> \n> It will then work just fine, and will be more efficient compared to what is\n> done today as only one queryid will be calculated.\n\nAfter reading Magnus' comment nearby, I realized that my most\nsignificant concern here is how to know whether any query_id provider is\nactive. The way of setting the hook cannot enforce notifying that\nkind of thing on plugins. For me, implementing them as a DLL looked like\none of the most promising ways of enabling that without needing any\nboiler-plate.\n\nAnother not-perfect (in that it needs a boiler-plate) but workable way\nis letting query-id providers set some variable, including a GUC,\nexplicitly as Magnus suggested. A GUC would be better in that it is\nnaturally observable by users.\n\nEven though there's a possibility that a developer of a query_id\nprovider forgets to set it, it would probably be easily\nnoticeable.
On the other hand, it gives a sure means to know whether any\nquery_id provider is active.\n\nHow about adding a GUC_INTERNAL \"current_query_provider\" or such?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 13 May 2021 09:59:43 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "At Wed, 12 May 2021 20:36:18 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> On Wed, May 12, 2021 at 05:51:49PM +0800, Julien Rouhaud wrote:\n> > On Wed, May 12, 2021 at 11:42:12AM +0200, Pavel Stehule wrote:\n> > > this check just can check if there is \"any\" query-id provider. In this\n> > > context is not important if it is builtin or external\n> > \n> > Yes, the idea is that if you execute \"SELECT * FROM pg_stat_statements\" or\n> > similar, then if the executing query itself doesn't have a queryid then\n> > it's very likely that you didn't configure compute_query_id or an alternative\n> > query_id implementation properly. And loudly complaining seems like the right\n> > thing to do.\n> \n> I understand the desire to make pg_stat_statements require minimal\n> configuration, but frankly, if the server-side variable query id API is\n> confusing, I think we have done more harm than good.\n> \n> The problem with compute_query_id=auto is that there is no way to know\n> if the query id is actually enabled, unless you guess from the installed\n> extensions, or we add another variable to report that, and maybe another\n> variable to control the provider, unless we require turning\n> compute_query_id=off if you are using custom query id computation. What\n> if it is auto, and pg_stat_statements is installed, and you want to use a\n> custom query id computation --- what happens?
As you can see, this is\n> all becoming very complicated.\n> \n> I think we might be just as well to go with compute_query_id=on/off, and\n> just complain loudly from CREATE EXTENSION, or in the server logs on\n> server start via shared_preload_libraries, or when querying\n> pg_stat_statements system view. We simply say to change\n> compute_query_id=on or to provide a custom query id implementation.\n\nFWIW, I personally am fine with that (ignoring details :p), that is,\nleaving the whole responsibility of a sane setup to users. If we are\ngoing to automate even a part of it, I think we need to make it\nperfect at least to a certain level. The current query_id = auto\nlooks somewhat halfway, or narrow-ranged.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 13 May 2021 10:12:37 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Thu, May 13, 2021 at 08:52:36AM +0800, Julien Rouhaud wrote:\n> On Wed, May 12, 2021 at 08:36:18PM -0400, Bruce Momjian wrote:\n> > The problem with compute_query_id=auto is that there is no way to know\n> > if the query id is actually enabled, unless you guess from the installed\n> > extensions, or we add another variable to report that, and maybe another\n> > variable to control the provider, unless we require turning\n> > compute_query_id=off if you are using custom query id computation. What\n> > if it is auto, and pg_stat_statements is installed, and you want to use a\n> > custom query id computation --- what happens? As you can see, this is\n> > all becoming very complicated.\n> \n> Well, as implemented you can get the value of compute_query_id, and if it's\n> still \"auto\" then it's not enabled as calling queryIdWanted() would turn it to\n> on. I agree that it's not ideal but you have a way to know.
We could document\n> that auto means that it's set to auto and no one asked to automatically enabled\n> it.\n\nWow, so the extension changes it? How do we record the \"source\" of that\nchange? Do we have other GUCs that do that?\n\n> Or you can just do e.g.\n> \n> SELECT query_id FROM pg_stat_activity WHERE pid = pg_backend_pid();\n> \n> and see if you have a query_id or not.\n\nTrue.\n\n> If you want to use third-party modules, they you have to explicitly disable\n> compute_query_id. If you don't, every query execution will raise an error as\n> we documented that third-party modules should error out if they see that a\n> query_id is already generated. Such module could also explicitly check that\n> compute_query_id is off and also raise an error if that's not the case.\n\nOK.\n\n> > I think we might be just as well to go with compute_query_id=on/off, and\n> > just complain loudly from CREATE EXTENSION, or in the server logs on\n> > server start via shared_preload_libraries, or when querying\n> > pg_stat_statements system view. We simply say to change\n> > compute_query_id=on or to provide a custom query id implementation.\n> \n> I'm not opposed to that, but it was already suggested and apparently people\n> didn't like that approach.\n\nAlso probably true.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Wed, 12 May 2021 21:13:25 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Thu, May 13, 2021 at 09:59:43AM +0900, Kyotaro Horiguchi wrote:\n> \n> The query_id of its own is provided because pg_stat_statements did not\n> expose query_id. And it has been preserved only for the case the\n> plugin is used without pg_stat_statements activated. 
Now that the\n> in-core query_id is available, the last reason for the special\n> provider has gone.\n\nAh I see, indeed that makes sense. However, I'm assuming that pg_store_plans\nalso requires *a* queryid, not specifically what used to be pg_stat_statements'\none, right? So it could also fall back on an alternative implementation if users\nconfigured one. Even if that's not the case, the core query_id can still be\ncalculated if needed as the function is now exported.\n\n\n", "msg_date": "Thu, 13 May 2021 09:42:28 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "At Thu, 13 May 2021 09:59:43 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> How about adding a GUC_INTERNAL \"current_query_provider\" or such?\n\nOn second thought, I wonder why we don't just call JumbleQuery in\npgss_post_parse_analyze when compute_query_id is \"off\".\n\nWe can think of this behavior as the following.\n\n- compute_query_id sets whether the *internal* query-id provider turns\n on. If it is \"off\", query_id in, for example, pg_stat_activity is\n not set.
Even in that case it is set to some valid value if some\n alternative query-id provider is active.\n\nOn the other hand, pg_stat_statements looks as if it provides an\n\"alternative\" query-id provider, but actually it is just calling the\nin-core JumbleQuery if it was not called yet.\n\n\n@@ -830,6 +830,10 @@ pgss_post_parse_analyze(ParseState *pstate, Query *query, JumbleState *jstate)\n \t\treturn;\n \t}\n \n+\t/* Call in-core JumbleQuery if it was not called in-core */\n+\tif (!jstate)\n+\t\tjstate = JumbleQuery(query, pstate->p_sourcetext);\n+\n \t/*\n\nAny plugin that wants to use its own query-id generator can WARN if\njstate is not NULL, but can also proceed, ignoring the existing jstate.\n\nWARNING: the default query-id provider is active, turn off compute_query_id to avoid unnecessary calculation\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 13 May 2021 10:51:52 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Wed, May 12, 2021 at 09:13:25PM -0400, Bruce Momjian wrote:\n> On Thu, May 13, 2021 at 08:52:36AM +0800, Julien Rouhaud wrote:\n> > \n> > Well, as implemented you can get the value of compute_query_id, and if it's\n> > still \"auto\" then it's not enabled as calling queryIdWanted() would turn it to\n> > on. I agree that it's not ideal but you have a way to know. We could document\n> > that auto means that it's set to auto and no one asked to automatically enable\n> > it.\n> \n> Wow, so the extension changes it?\n\nYes. It seemed better to go this way rather than having a secondary read-only\nGUC for that.\n\n> How do we record the \"source\" of that\n> change? Do we have other GUCs that do that?\n\nNo, we don't. But I don't know what exactly you would like to have as a\nsource?
What if you have for instance pg_stat_statements, pg_stat_kcache,\npg_store_plans and pg_wait_sampling installed? All those extensions need a\nquery_id (or at least benefit from it for pg_wait_sampling), is there any value\nto give a full list of all the modules that would enable compute_query_id?\n\nI'm assuming that anyone wanting to install any of those extensions (or any\nsimilar one) is fully aware that they aggregate metrics based on at least a\nquery_id. If they don't, well they probably never read any documentation since\npostgres 9.2 which introduced query normalization, and I doubt that they will\nbe interested in having access to the information anyway.\n\n\n", "msg_date": "Thu, 13 May 2021 09:57:00 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Thu, May 13, 2021 at 10:51:52AM +0900, Kyotaro Horiguchi wrote:\n> At Thu, 13 May 2021 09:59:43 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > How about adding a GUC_INTERNAL \"current_query_provider\" or such?\n> \n> On the second thought, I wonder why we don't just call JumbleQuery in\n> pgss_post_parse_analyze when compute_query_id is \"off\".\n\nBecause not generating a query_id for a custom query_id implementation is a\nvalid use case for queries that are known to lead to huge pg_stat_statements\noverhead, as I mentioned in [1]. 
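As a sketch of what "not generating a query_id" for such queries could look like in a custom provider (hypothetical helper names, using the PG14-era hook signature; the actual pg_queryid code may differ):

```c
/* Sketch only: a custom provider that leaves queryId at 0 for queries
 * it never wants aggregated, e.g. ones touching temporary relations.
 * my_uses_temp_relation() and my_compute_query_id() are hypothetical. */
static void
my_queryid_post_parse_analyze(ParseState *pstate, Query *query,
                              JumbleState *jstate)
{
    if (prev_post_parse_analyze_hook)
        prev_post_parse_analyze_hook(pstate, query, jstate);

    /* queryId stays 0, so consumers like pg_stat_statements skip it. */
    if (my_uses_temp_relation(query))
        return;

    query->queryId = my_compute_query_id(pstate->p_sourcetext, query);
}
```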
For the record I implemented that in\npg_queryid (optionally don't generate query_id for queries referencing a temp\nrelation) yesterday evening as a POC for that approach.\n\n[1]: https://www.postgresql.org/message-id/20210512081445.axosz3xf7ydrhe7o@nol\n\n\n", "msg_date": "Thu, 13 May 2021 10:02:45 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "At Thu, 13 May 2021 10:02:45 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> On Thu, May 13, 2021 at 10:51:52AM +0900, Kyotaro Horiguchi wrote:\n> > At Thu, 13 May 2021 09:59:43 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > > How about adding a GUC_INTERNAL \"current_query_provider\" or such?\n> > \n> > On the second thought, I wonder why we don't just call JumbleQuery in\n> > pgss_post_parse_analyze when compute_query_id is \"off\".\n> \n> Because not generating a query_id for a custom query_id implementation is a\n> valid use case for queries that are known to lead to huge pg_stat_statements\n> overhead, as I mentioned in [1]. For the record I implemented that in\n> pg_queryid (optionally don't generate query_id for queries referencing a temp\n> relation) yesterday evening as a POC for that approach.\n\nYes, I know. So I said that \"if not yet called\". I believe any \"real\"\nalternative query-id provider is supposed to be hooked \"before\"\npg_stat_statements. (It is a kind of magic to control the order of\nplugins, though..) 
When the alternative provider has generated a query_id\n(that is, it has set jstate), pg_stat_statements doesn't call the\nin-core JumbleQuery and uses the given query_id.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 13 May 2021 11:26:29 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "At Thu, 13 May 2021 11:26:29 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Thu, 13 May 2021 10:02:45 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> > On Thu, May 13, 2021 at 10:51:52AM +0900, Kyotaro Horiguchi wrote:\n> > > At Thu, 13 May 2021 09:59:43 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > > > How about adding a GUC_INTERNAL \"current_query_provider\" or such?\n> > > \n> > > On the second thought, I wonder why we don't just call JumbleQuery in\n> > > pgss_post_parse_analyze when compute_query_id is \"off\".\n> > \n> > Because not generating a query_id for a custom query_id implementation is a\n> > valid use case for queries that are known to lead to huge pg_stat_statements\n> > overhead, as I mentioned in [1]. For the record I implemented that in\n> > pg_queryid (optionally don't generate query_id for queries referencing a temp\n> > relation) yesterday evening as a POC for that approach.\n> \n> Yes, I know. So I said that \"if not yet called\". I believe any \"real\"\n> alternative query-id provider is supposed to be hooked \"before\"\n> pg_stat_statements. (It is a kind of magic to control the order of\n> plugins, though..)
When the alternative provider generated a query_id\n> (that is, it has set jstate), pg_stat_statment doesn't call the\n> in-core JumbleQuery and uses the givin query_id.\n\nForgot to mention, I think that the state \"query_id provider is active\nbut it has not assigned one to this query\" can be signaled by\njstate=<non-null> and query_id = 0.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 13 May 2021 11:30:56 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Thu, May 13, 2021 at 11:30:56AM +0900, Kyotaro Horiguchi wrote:\n> At Thu, 13 May 2021 11:26:29 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > At Thu, 13 May 2021 10:02:45 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> > > On Thu, May 13, 2021 at 10:51:52AM +0900, Kyotaro Horiguchi wrote:\n> > > > At Thu, 13 May 2021 09:59:43 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > > > > How about adding a GUC_INTERNAL \"current_query_provider\" or such?\n> > > > \n> > > > On the second thought, I wonder why we don't just call JumbleQuery in\n> > > > pgss_post_parse_analyze when compute_query_id is \"off\".\n> > > \n> > > Because not generating a query_id for a custom query_id implementation is a\n> > > valid use case for queries that are known to lead to huge pg_stat_statements\n> > > overhead, as I mentioned in [1]. For the record I implemented that in\n> > > pg_queryid (optionally don't generate query_id for queries referencing a temp\n> > > relation) yesterday evening as a POC for that approach.\n> > \n> > Yes, I know. So I said that \"if not yet called\". I believe any \"real\"\n> > alternative query-id provider is supposed to be hooked \"before\"\n> > pg_stat_statements. (It is a kind of magic to control the order of\n> > plugins, though..) 
When the alternative provider generated a query_id\n> > (that is, it has set jstate), pg_stat_statment doesn't call the\n> > in-core JumbleQuery and uses the givin query_id.\n> \n> Forgot to mention, I think that the state \"query_id provider is active\n> but it has not assigned one to this query\" can be signaled by\n> jstate=<non-null> and query_id = 0.\n\nI assume that you mean \"third-party query_id provider\" here, as the core one\nwill always return a non-zero query_id?\n\nI guess it could work, but a lot of people are complaining that having\ncompute_query_id = [ off | on | auto ] is too confusing, so I don't see how\nhaving \"off\" means \"sometimes off, sometimes on\" is going to be any clearer for\nusers.\n\n\n", "msg_date": "Thu, 13 May 2021 10:39:20 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Thu, May 13, 2021 at 11:26:29AM +0900, Kyotaro Horiguchi wrote:\n> \n> I believe any \"real\"\n> alternative query-id provider is supposed to be hooked \"before\"\n> pg_stat_statements. 
(It is a kind of magic to control the order of\n> plugins, though..\n\nIndeed, you have to configure shared_preload_libraries depending on whether\neach module calls the previous post_parse_analyze_hook before or after its own\nprocessing, and that's the main reason why I think a dedicated entry point\nwould be better.\n\n\n", "msg_date": "Thu, 13 May 2021 10:43:03 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "At Thu, 13 May 2021 10:39:20 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> On Thu, May 13, 2021 at 11:30:56AM +0900, Kyotaro Horiguchi wrote:\n> > At Thu, 13 May 2021 11:26:29 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > > At Thu, 13 May 2021 10:02:45 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> > > Yes, I know. So I said that \"if not yet called\". I believe any \"real\"\n> > > alternative query-id provider is supposed to be hooked \"before\"\n> > > pg_stat_statements. (It is a kind of magic to control the order of\n> > > plugins, though..) When the alternative provider generated a query_id\n> > > (that is, it has set jstate), pg_stat_statment doesn't call the\n> > > in-core JumbleQuery and uses the givin query_id.\n> > \n> > Forgot to mention, I think that the state \"query_id provider is active\n> > but it has not assigned one to this query\" can be signaled by\n> > jstate=<non-null> and query_id = 0.\n> \n> I assume that you mean \"third-party query_id provider\" here, as the core one\n> will always return a non-zero query_id?\n\nRight.\n\n> I guess it could work, but a lot of people are complaining that having\n> compute_query_id = [ off | on | auto ] is too confusing, so I don't see how\n> having \"off\" means \"sometimes off, sometimes on\" is going to be any clearer for\n> users.\n\nI don't get it. 
It reads as \"people are complaining the tristate is too\nconfusing, so I made it tristate\"?\n\nFor the second point, so I said that the variable controls whether the\n\"internal\" query-id provider turns on. It would be clearer if the name\nwere something like \"use_internal_query_id_generator\".\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 13 May 2021 11:49:34 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Thu, May 13, 2021 at 11:49:34AM +0900, Kyotaro Horiguchi wrote:\n> At Thu, 13 May 2021 10:39:20 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> > On Thu, May 13, 2021 at 11:30:56AM +0900, Kyotaro Horiguchi wrote:\n> > > At Thu, 13 May 2021 11:26:29 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > > > At Thu, 13 May 2021 10:02:45 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> > > > Yes, I know. So I said that \"if not yet called\". I believe any \"real\"\n> > > > alternative query-id provider is supposed to be hooked \"before\"\n> > > > pg_stat_statements. (It is a kind of magic to control the order of\n> > > > plugins, though..)
When the alternative provider generated a query_id\n> > > > (that is, it has set jstate), pg_stat_statment doesn't call the\n> > > > in-core JumbleQuery and uses the givin query_id.\n> > > \n> > > Forgot to mention, I think that the state \"query_id provider is active\n> > > but it has not assigned one to this query\" can be signaled by\n> > > jstate=<non-null> and query_id = 0.\n> > \n> > I assume that you mean \"third-party query_id provider\" here, as the core one\n> > will always return a non-zero query_id?\n> \n> Right.\n> \n> > I guess it could work, but a lot of people are complaining that having\n> > compute_query_id = [ off | on | auto ] is too confusing, so I don't see how\n> > having \"off\" means \"sometimes off, sometimes on\" is going to be any clearer for\n> > users.\n> \n> I don't get it. It read as \"people are complaining the tristate is too\n> confusing, so I made it tristate\"?\n\nNo, the consensus was for having a tristate, so I implemented it, and now\npeople are complaining that it's too confusing.\n\n> For the second point, so I said that the variable controls whether the\n> \"internal\" query-id pvovider turn on. It is more clearer if the name\n> were something like \"use_internal_query_id_generator\".\n\nI don't see how it's really different. If I understand correctly, you're\nsuggesting that\nuse_internal_query_id_generator = off\ncan mean either\n\n- off\n- on if pg_stat_statements or similar extension is configured but no custom\n query_id provider is configured, and in any case it will always be displayed\n as off\n\nwith no other new GUC. 
Is that correct?\n\n\n", "msg_date": "Thu, 13 May 2021 10:59:24 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Thu, May 13, 2021 at 09:57:00AM +0800, Julien Rouhaud wrote:\n> On Wed, May 12, 2021 at 09:13:25PM -0400, Bruce Momjian wrote:\n> > On Thu, May 13, 2021 at 08:52:36AM +0800, Julien Rouhaud wrote:\n> > > \n> > > Well, as implemented you can get the value of compute_query_id, and if it's\n> > > still \"auto\" then it's not enabled as calling queryIdWanted() would turn it to\n> > > on. I agree that it's not ideal but you have a way to know. We could document\n> > > that auto means that it's set to auto and no one asked to automatically enabled\n> > > it.\n> > \n> > Wow, so the extension changes it?\n> \n> Yes. It seemed better to go this way rather than having a secondary read-only\n> GUC for that.\n> \n> > How do we record the \"source\" of that\n> > change? Do we have other GUCs that do that?\n> \n> No, we don't. But I don't know what exactly you would like to have as a\n\nOK.\n\n> source? What if you have for instance pg_stat_statements, pg_stat_kcache,\n> pg_store_plans and pg_wait_sampling installed? All those extensions need a\n> query_id (or at least benefit from it for pg_wait_sampling), is there any value\n> to give a full list of all the modules that would enable compute_query_id?\n\nWell, we don't have any other cases where the source of the change is\nthis indeterminate, so I don't really have a suggestion. I think this\ndoes highlight another case where 'auto' just isn't a good fit for our\nAPI.\n\n> I'm assuming that anyone wanting to install any of those extensions (or any\n> similar one) is fully aware that they aggregate metrics based on at least a\n> query_id. 
If they don't, well they probably never read any documentation since\n> postgres 9.2 which introduced query normalization, and I doubt that they will\n> be interested in having access to the information anyway.\n\nMy point is that we are changing a setting from an extension, and if you\nlook in pg_setting, it will say \"default\"?\n\nIf the user already has to edit postgresql.conf to set\nshared_preload_libraries, I still don't see why having them set\ncompute_query_id at the same time is a significant problem and a reason\nto distort our API to do 'auto'.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Wed, 12 May 2021 23:06:52 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "At Thu, 13 May 2021 10:43:03 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> On Thu, May 13, 2021 at 11:26:29AM +0900, Kyotaro Horiguchi wrote:\n> > \n> > I believe any \"real\"\n> > alternative query-id provider is supposed to be hooked \"before\"\n> > pg_stat_statements. (It is a kind of magic to control the order of\n> > plugins, though..\n> \n> Indeed, you have to configure shared_preload_libraries depending on whether\n> each module calls the previous post_parse_analyze_hook before or after its own\n> processing, and that's the main reason why I think a dedicated entry point\n> would be better.\n\nI see it as cleaner than the status-quo. 
(But still believing less\ncleaner than DLL:p, since the same problem happens if two\nquery_id-generating modules are competing on the new hook ponit.).\n\nYou told that a special query-id provider needed to be separated to\nanother DLL, but a preload shared librarie is also a dll and I think\nany exported function in it can be obtained via\nload_external_function().\n\nAs the result, even if we take the DLL approach, still not need to\nsplit out the query-id provider part. By the following config:\n\n> query_id_provider = 'pg_stat_statements'\n\nthe core can obtain the entrypoint of, say, \"_PG_calculate_query_id\"\nto call it. And it can be of another module.\n\nIt seems like the only problem doing that is we don't have a means to\ncall per-process intializer for a preload libralies.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 13 May 2021 12:11:12 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Wed, May 12, 2021 at 11:06:52PM -0400, Bruce Momjian wrote:\n> On Thu, May 13, 2021 at 09:57:00AM +0800, Julien Rouhaud wrote:\n> \n> > source? What if you have for instance pg_stat_statements, pg_stat_kcache,\n> > pg_store_plans and pg_wait_sampling installed? All those extensions need a\n> > query_id (or at least benefit from it for pg_wait_sampling), is there any value\n> > to give a full list of all the modules that would enable compute_query_id?\n> \n> Well, we don't have any other cases where the source of the change is\n> this indeterminate, so I don't really have a suggestion. 
I think this\n> does highlight another case where 'auto' just isn't a good fit for our\n> API.\n\nIt may depends on your next question\n\n> > I'm assuming that anyone wanting to install any of those extensions (or any\n> > similar one) is fully aware that they aggregate metrics based on at least a\n> > query_id. If they don't, well they probably never read any documentation since\n> > postgres 9.2 which introduced query normalization, and I doubt that they will\n> > be interested in having access to the information anyway.\n> \n> My point is that we are changing a setting from an extension, and if you\n> look in pg_setting, it will say \"default\"?\n\nNo, it will say \"on\". What the patch I sent implements is:\n\n- compute_query_id = on means it was either explicitly set to on, or\n automatically set to on if it was allowed to (so initially set to auto). It\n means you know whether core query_id calculation is enabled or not, you can\n know looking at the reset value it it was changed by an extension, you just\n don't know which one(s).\n\n- compute_query_id = auto means that it can be set to on, it just wasn't yet,\n so it's off, and may change if you have an extension that can be dynamically\n loaded and request for core query_id calculation to be enabled\n\n- compute_query_id = off means it's off\n\n> If the user already has to edit postgresql.conf to set\n> shared_preload_libraries, I still don't see why having them set\n> compute_query_id at the same time is a significant problem and a reason\n> to distort our API to do 'auto'.\n\nLooking at the arguments until now my understanding is that it's because \"no\none will read the doc anyway\".\n\n\n", "msg_date": "Thu, 13 May 2021 11:16:13 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Thu, May 13, 2021 at 12:11:12PM +0900, Kyotaro Horiguchi wrote:\n> At Thu, 13 May 2021 10:43:03 +0800, Julien 
Rouhaud <rjuju123@gmail.com> wrote in \n> > On Thu, May 13, 2021 at 11:26:29AM +0900, Kyotaro Horiguchi wrote:\n> > > \n> > > I believe any \"real\"\n> > > alternative query-id provider is supposed to be hooked \"before\"\n> > > pg_stat_statements. (It is a kind of magic to control the order of\n> > > plugins, though..\n> > \n> > Indeed, you have to configure shared_preload_libraries depending on whether\n> > each module calls the previous post_parse_analyze_hook before or after its own\n> > processing, and that's the main reason why I think a dedicated entry point\n> > would be better.\n> \n> I see it as cleaner than the status-quo. (But still believing less\n> cleaner than DLL:p, since the same problem happens if two\n> query_id-generating modules are competing on the new hook ponit.).\n> \n> You told that a special query-id provider needed to be separated to\n> another DLL\n\nNo, I'm saying a different entry point. It can be a new hook or an explicit\nfunction name called for a dynamically loaded function, I'm fine with both as\nlong as it's called before post_parse_analyze_hook.\n\n> It seems like the only problem doing that is we don't have a means to\n> call per-process intializer for a preload libralies.\n\nBut that's going to happen only once per backend? If it's still adding too\nmuch overhead you could add the module in shared_preload_libraries.\n\n\n", "msg_date": "Thu, 13 May 2021 11:23:17 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Thu, May 13, 2021 at 11:16:13AM +0800, Julien Rouhaud wrote:\n> On Wed, May 12, 2021 at 11:06:52PM -0400, Bruce Momjian wrote:\n> > On Thu, May 13, 2021 at 09:57:00AM +0800, Julien Rouhaud wrote:\n> > \n> > > source? What if you have for instance pg_stat_statements, pg_stat_kcache,\n> > > pg_store_plans and pg_wait_sampling installed? 
All those extensions need a\n> > > query_id (or at least benefit from it for pg_wait_sampling), is there any value\n> > > to give a full list of all the modules that would enable compute_query_id?\n> > \n> > Well, we don't have any other cases where the source of the change is\n> > this indeterminate, so I don't really have a suggestion. I think this\n> > does highlight another case where 'auto' just isn't a good fit for our\n> > API.\n> \n> It may depends on your next question\n> \n> > > I'm assuming that anyone wanting to install any of those extensions (or any\n> > > similar one) is fully aware that they aggregate metrics based on at least a\n> > > query_id. If they don't, well they probably never read any documentation since\n> > > postgres 9.2 which introduced query normalization, and I doubt that they will\n> > > be interested in having access to the information anyway.\n> > \n> > My point is that we are changing a setting from an extension, and if you\n> > look in pg_setting, it will say \"default\"?\n> > \n> > No, it will say \"on\". What the patch I sent implements is:\n\nI was asking what pg_settings.source will say:\n\n\tSELECT name, source FROM pg_settings;\n\n> - compute_query_id = on means it was either explicitly set to on, or\n> automatically set to on if it was allowed to (so initially set to auto). It\n> means you know whether core query_id calculation is enabled or not, you can\n> know looking at the reset value it it was changed by an extension, you just\n> don't know which one(s).\n> \n> - compute_query_id = auto means that it can be set to on, it just wasn't yet,\n> so it's off, and may change if you have an extension that can be dynamically\n> loaded and request for core query_id calculation to be enabled\n\nSo, it is 'off', but not set to 'off' in the GUC sense --- just off as\nin not being computed. 
You can see the confusion in my just reading\nthat sentence.\n\n> - compute_query_id = off means it's off\n> \n> > If the user already has to edit postgresql.conf to set\n> > shared_preload_libraries, I still don't see why having them set\n> > compute_query_id at the same time is a significant problem and a reason\n> > to distort our API to do 'auto'.\n> \n> Looking at the arguments until now my understanding is that it's because \"no\n> one will read the doc anyway\".\n\nHow do they know to set shared_preload_libraries then? We change the\nuser API all the time, especially in GUCs, and even rename them, but for\nsome reason we don't think people using pg_stat_statements can be\ntrusted to read the release notes and change their behavior. I just\ndon't get it.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Wed, 12 May 2021 23:33:32 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Wed, May 12, 2021 at 11:33:32PM -0400, Bruce Momjian wrote:\n> On Thu, May 13, 2021 at 11:16:13AM +0800, Julien Rouhaud wrote:\n> > On Wed, May 12, 2021 at 11:06:52PM -0400, Bruce Momjian wrote:\n> > > On Thu, May 13, 2021 at 09:57:00AM +0800, Julien Rouhaud wrote:\n> > > \n> > > > source? What if you have for instance pg_stat_statements, pg_stat_kcache,\n> > > > pg_store_plans and pg_wait_sampling installed? All those extensions need a\n> > > > query_id (or at least benefit from it for pg_wait_sampling), is there any value\n> > > > to give a full list of all the modules that would enable compute_query_id?\n> > > \n> > > Well, we don't have any other cases where the source of the change is\n> > > this indeterminate, so I don't really have a suggestion. 
I think this\n> > > does highlight another case where 'auto' just isn't a good fit for our\n> > > API.\n> > \n> > It may depends on your next question\n> > \n> > > > I'm assuming that anyone wanting to install any of those extensions (or any\n> > > > similar one) is fully aware that they aggregate metrics based on at least a\n> > > > query_id. If they don't, well they probably never read any documentation since\n> > > > postgres 9.2 which introduced query normalization, and I doubt that they will\n> > > > be interested in having access to the information anyway.\n> > > \n> > > My point is that we are changing a setting from an extension, and if you\n> > > look in pg_setting, it will say \"default\"?\n> > \n> > No, it will say \"on\". What the patch I sent implements is:\n> \n> I was asking what pg_settings.source will say:\n> \n> \tSELECT name, soource FROM pg_settings;\n\nOh sorry. Here's the full output before and after a dynamic call to\nqueryIsWanted():\n\nname | compute_query_id\nsetting | auto\nunit | <NULL>\ncategory | Statistics / Monitoring\nshort_desc | Compute query identifiers.\nextra_desc | <NULL>\ncontext | superuser\nvartype | enum\nsource | default\nmin_val | <NULL>\nmax_val | <NULL>\nenumvals | {auto,on,off}\nboot_val | auto\nreset_val | auto\nsourcefile | <NULL>\nsourceline | <NULL>\npending_restart | f\n\n=# LOAD 'pg_qualstats'; /* will call queryIsWanted() */\nWARNING: 01000: Without shared_preload_libraries, only current backend stats will be available.\nLOAD\n\nname | compute_query_id\nsetting | on\nunit | <NULL>\ncategory | Statistics / Monitoring\nshort_desc | Compute query identifiers.\nextra_desc | <NULL>\ncontext | superuser\nvartype | enum\nsource | default\nmin_val | <NULL>\nmax_val | <NULL>\nenumvals | {auto,on,off}\nboot_val | auto\nreset_val | auto\nsourcefile | <NULL>\nsourceline | <NULL>\npending_restart | f\n\nSo yes, it says \"default\" (and it's the same if the change is done during\npostmaster init). 
It can be fixed if that's the only issue.\n\n> \n> > - compute_query_id = on means it was either explicitly set to on, or\n> > automatically set to on if it was allowed to (so initially set to auto). It\n> > means you know whether core query_id calculation is enabled or not, you can\n> > know looking at the reset value it it was changed by an extension, you just\n> > don't know which one(s).\n> > \n> > - compute_query_id = auto means that it can be set to on, it just wasn't yet,\n> > so it's off, and may change if you have an extension that can be dynamically\n> > loaded and request for core query_id calculation to be enabled\n> \n> So, it is 'off', but not set to 'off' in the GUC sense --- just off as\n> in not being computed. You can see the confusion in my just reading\n> that sentence.\n\nIt's technically not \"off\" but \"not on yet\", but that's probably just making it\nworse :)\n\n> How do they know to set shared_preload_libraries then? We change the\n> user API all the time, especially in GUCs, and even rename them, but for\n> some reason we don't think people using pg_stat_statements can be\n> trusted to read the release notes and change their behavior. I just\n> don't get it.\n\nI don't know what to say. So here is a summary of the complaints that I'm\naware of:\n\n- https://www.postgresql.org/message-id/1953aec168224b95b0c962a622bef0794da6ff40.camel@moonset.ru\nThat was only a couple of days after the commit just before the feature freeze,\nso it may be the less relevant complaint.\n\n- https://www.postgresql.org/message-id/CAOxo6XJEYunL71g0yD-zRzNRRqBG0Ssw-ARygy5pGRdSjK5YLQ%40mail.gmail.com\nDid a git bisect to find the commit that changed the behavior and somehow\ndidn't notice the new setting\n\n- this thread, with Fuji-san saying:\n\n> I'm afraid that users may easily forget to enable compute_query_id when using\n> pg_stat_statements (because this setting was not necessary in v13 or before)\n\n- this thread, with Peter E. 
saying:\n\n> Now there is the additional burden of turning on this weird setting that\n> no one understands. That's a 50% increase in burden. And almost no one will\n> want to use a nondefault setting. pg_stat_statements is pretty popular. I\n> think leaving in this requirement will lead to widespread confusion and\n> complaints.\n\n- this thread, with Pavel saying:\n\n> Until now, the pg_stat_statements was zero-config. So the change is not user\n> friendly.\n\nSo it's a mix of \"it's changing something that didn't change in a long time\"\nand \"it's adding extra footgun and/or burden as it's not doing by default what\nthe majority of users will want\", with an overwhelming majority of people\nsupporting the \"we don't want that extra burden\".\n\n\n", "msg_date": "Thu, 13 May 2021 12:03:42 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "\n\nOn 2021/05/13 13:03, Julien Rouhaud wrote:\n> On Wed, May 12, 2021 at 11:33:32PM -0400, Bruce Momjian wrote:\n>> On Thu, May 13, 2021 at 11:16:13AM +0800, Julien Rouhaud wrote:\n>>> On Wed, May 12, 2021 at 11:06:52PM -0400, Bruce Momjian wrote:\n>>>> On Thu, May 13, 2021 at 09:57:00AM +0800, Julien Rouhaud wrote:\n>>>>\n>>>>> source? What if you have for instance pg_stat_statements, pg_stat_kcache,\n>>>>> pg_store_plans and pg_wait_sampling installed? All those extensions need a\n>>>>> query_id (or at least benefit from it for pg_wait_sampling), is there any value\n>>>>> to give a full list of all the modules that would enable compute_query_id?\n>>>>\n>>>> Well, we don't have any other cases where the source of the change is\n>>>> this indeterminate, so I don't really have a suggestion. 
I think this\n>>>> does highlight another case where 'auto' just isn't a good fit for our\n>>>> API.\n>>>\n>>> It may depends on your next question\n>>>\n>>>>> I'm assuming that anyone wanting to install any of those extensions (or any\n>>>>> similar one) is fully aware that they aggregate metrics based on at least a\n>>>>> query_id. If they don't, well they probably never read any documentation since\n>>>>> postgres 9.2 which introduced query normalization, and I doubt that they will\n>>>>> be interested in having access to the information anyway.\n>>>>\n>>>> My point is that we are changing a setting from an extension, and if you\n>>>> look in pg_setting, it will say \"default\"?\n>>>\n>>> No, it will say \"on\". What the patch I sent implements is:\n>>\n>> I was asking what pg_settings.source will say:\n>>\n>> \tSELECT name, soource FROM pg_settings;\n> \n> Oh sorry. Here's the full output before and after a dynamic call to\n> queryIsWanted():\n> \n> name | compute_query_id\n> setting | auto\n> unit | <NULL>\n> category | Statistics / Monitoring\n> short_desc | Compute query identifiers.\n> extra_desc | <NULL>\n> context | superuser\n> vartype | enum\n> source | default\n> min_val | <NULL>\n> max_val | <NULL>\n> enumvals | {auto,on,off}\n> boot_val | auto\n> reset_val | auto\n> sourcefile | <NULL>\n> sourceline | <NULL>\n> pending_restart | f\n> \n> =# LOAD 'pg_qualstats'; /* will call queryIsWanted() */\n> WARNING: 01000: Without shared_preload_libraries, only current backend stats will be available.\n> LOAD\n> \n> name | compute_query_id\n> setting | on\n> unit | <NULL>\n> category | Statistics / Monitoring\n> short_desc | Compute query identifiers.\n> extra_desc | <NULL>\n> context | superuser\n> vartype | enum\n> source | default\n> min_val | <NULL>\n> max_val | <NULL>\n> enumvals | {auto,on,off}\n> boot_val | auto\n> reset_val | auto\n> sourcefile | <NULL>\n> sourceline | <NULL>\n> pending_restart | f\n> \n> So yes, it says \"default\" (and it's the same if 
the change is done during\n> postmaster init). It can be fixed if that's the only issue.\n\nI like leaving compute_query_id=auto instead of overwriting it to \"on\",\neven when queryIsWanted() is called, as I told upthread. Then we can decide\nthat query id computation is necessary when compute_query_id=auto and\nthe special flag is enabled by queryIsWanted(). Of course as you and Magnus\ndiscussed upthread, the issue in EXEC_BACKEND case should be fixed,\nmaybe by using save_backend_variables() or something, though.\n\nIf we do this, compute_query_id=auto seems to be similar to huge_pages=try.\nWhen huge_pages=try, whether huge pages is actually used is defined depending\noutside PostgreSQL (i.e, OS setting in this case). Neither pg_settings.setting nor\nsouce are not changed in that case.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 13 May 2021 13:18:05 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "Le jeu. 13 mai 2021 à 12:18, Fujii Masao <masao.fujii@oss.nttdata.com> a\nécrit :\n\n>\n> I like leaving compute_query_id=auto instead of overwriting it to \"on\",\n> even when queryIsWanted() is called, as I told upthread. Then we can decide\n> that query id computation is necessary when compute_query_id=auto and\n> the special flag is enabled by queryIsWanted(). Of course as you and Magnus\n> discussed upthread, the issue in EXEC_BACKEND case should be fixed,\n> maybe by using save_backend_variables() or something, though.\n>\n> If we do this, compute_query_id=auto seems to be similar to huge_pages=try.\n> When huge_pages=try, whether huge pages is actually used is defined\n> depending\n> outside PostgreSQL (i.e, OS setting in this case). 
Neither\n> pg_settings.setting nor\n> souce are not changed in that case.\n>\n\nI'm fine with that, but a lot of people complained that it wasn't good\nbecause you don't really know if it's actually on or not. I personally\ndon't think that it's an issue, because what user want is to\nautomagumically do what they want, not check how the magic happened, and if\nthey want a third party implementation then the module can error out if the\nsetting is on, so the burden will only be for those users, and handled by\nthe third party module author.", "msg_date": "Thu, 13 May 2021 12:25:43 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "At Thu, 13 May 2021 12:11:12 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> As the result, even if we take the DLL approach, still not need to\n> split out the query-id provider part. By the following config:\n> \n> > query_id_provider = 'pg_stat_statements'\n> \n> the core can obtain the entrypoint of, say, \"_PG_calculate_query_id\"\n> to call it. And it can be of another module.\n> \n> It seems like the only problem doing that is we don't have a means to\n> call per-process intializer for a preload libralies.\n\nSo this is a crude PoC of that.\n\npg_stat_statemnts defines its own query-id provider function in\npg_stat_statements which calls in-core DefaultJumbeQuery (end emits a\nlog line).\n\nIf server started with query_id_provider='auto' and pg_stat_statements\nis not loaded, pg_stat_activity.query_id is null.\n\nIf query_id_provider='auto' and pg_stat_statements is loaded,\npg_stat_activity.query_id is filled in with a query id.\n\nIf query_id_provider='default' or 'pg_stat_statements' and\npg_stat_statements is not loaded, pg_stat_activity.query_id is filled\nin with a query id.\n\nIf query_id_provider='none' and pg_stat_statements is loaded, it\ncomplains as \"query id provider is not available\" and refuss to start.\n\nIf showing the variable, it shows the real provider name instead of\nthe setting in postgresql.conf.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\ndiff --git 
a/contrib/pg_stat_statements/pg_stat_statements.c b/contrib/pg_stat_statements/pg_stat_statements.c\nindex a85f962801..207c4362af 100644\n--- a/contrib/pg_stat_statements/pg_stat_statements.c\n+++ b/contrib/pg_stat_statements/pg_stat_statements.c\n@@ -295,6 +295,7 @@ static bool pgss_save;\t\t\t/* whether to save stats across shutdown */\n \n void\t\t_PG_init(void);\n void\t\t_PG_fini(void);\n+JumbleState *_PG_calculate_query_id(Query *query, const char *querytext);\n \n PG_FUNCTION_INFO_V1(pg_stat_statements_reset);\n PG_FUNCTION_INFO_V1(pg_stat_statements_reset_1_7);\n@@ -478,6 +479,13 @@ _PG_fini(void)\n \tProcessUtility_hook = prev_ProcessUtility;\n }\n \n+/* Test queryid provider function */\n+JumbleState *_PG_calculate_query_id(Query *query, const char *querytext)\n+{\n+\telog(LOG, \"Called query id generatr of pg_stat_statements\");\n+\treturn DefaultJumbleQuery(query, querytext);\n+}\n+\n /*\n * shmem_startup hook: allocate or attach to shared memory,\n * then load any pre-existing statistics from file.\n@@ -544,6 +552,11 @@ pgss_shmem_startup(void)\n \tif (!IsUnderPostmaster)\n \t\ton_shmem_exit(pgss_shmem_shutdown, (Datum) 0);\n \n+\t/* request my defalt provider, but allow exisint one */\n+\tif (!queryIdWanted(\"pg_stat_statements\", true))\n+\t\tereport(ERROR,\n+\t\t\t\t(errmsg (\"query_id provider is not available\")));\n+\n \t/*\n \t * Done if some other process already completed our initialization.\n \t */\ndiff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c\nindex 1202bf85a3..be00564221 100644\n--- a/src/backend/commands/explain.c\n+++ b/src/backend/commands/explain.c\n@@ -245,8 +245,7 @@ ExplainQuery(ParseState *pstate, ExplainStmt *stmt,\n \tes->summary = (summary_set) ? 
es->summary : es->analyze;\n \n \tquery = castNode(Query, stmt->query);\n-\tif (compute_query_id)\n-\t\tjstate = JumbleQuery(query, pstate->p_sourcetext);\n+\tjstate = JumbleQuery(query, pstate->p_sourcetext);\n \n \tif (post_parse_analyze_hook)\n \t\t(*post_parse_analyze_hook) (pstate, query, jstate);\ndiff --git a/src/backend/parser/analyze.c b/src/backend/parser/analyze.c\nindex 168198acd1..bdf3a5a6d1 100644\n--- a/src/backend/parser/analyze.c\n+++ b/src/backend/parser/analyze.c\n@@ -124,8 +124,7 @@ parse_analyze(RawStmt *parseTree, const char *sourceText,\n \n \tquery = transformTopLevelStmt(pstate, parseTree);\n \n-\tif (compute_query_id)\n-\t\tjstate = JumbleQuery(query, sourceText);\n+\tjstate = JumbleQuery(query, sourceText);\n \n \tif (post_parse_analyze_hook)\n \t\t(*post_parse_analyze_hook) (pstate, query, jstate);\n@@ -163,8 +162,7 @@ parse_analyze_varparams(RawStmt *parseTree, const char *sourceText,\n \t/* make sure all is well with parameter types */\n \tcheck_variable_parameters(pstate, query);\n \n-\tif (compute_query_id)\n-\t\tjstate = JumbleQuery(query, sourceText);\n+\tjstate = JumbleQuery(query, sourceText);\n \n \tif (post_parse_analyze_hook)\n \t\t(*post_parse_analyze_hook) (pstate, query, jstate);\ndiff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c\nindex 6200699ddd..1034dfea28 100644\n--- a/src/backend/tcop/postgres.c\n+++ b/src/backend/tcop/postgres.c\n@@ -704,8 +704,7 @@ pg_analyze_and_rewrite_params(RawStmt *parsetree,\n \n \tquery = transformTopLevelStmt(pstate, parsetree);\n \n-\tif (compute_query_id)\n-\t\tjstate = JumbleQuery(query, query_string);\n+\tjstate = JumbleQuery(query, query_string);\n \n \tif (post_parse_analyze_hook)\n \t\t(*post_parse_analyze_hook) (pstate, query, jstate);\ndiff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c\nindex eb7f7181e4..70d06b825e 100644\n--- a/src/backend/utils/misc/guc.c\n+++ b/src/backend/utils/misc/guc.c\n@@ -101,6 +101,7 @@\n #include 
\"utils/plancache.h\"\n #include \"utils/portal.h\"\n #include \"utils/ps_status.h\"\n+#include \"utils/queryjumble.h\"\n #include \"utils/rls.h\"\n #include \"utils/snapmgr.h\"\n #include \"utils/tzparser.h\"\n@@ -534,7 +535,6 @@ extern const struct config_enum_entry dynamic_shared_memory_options[];\n /*\n * GUC option variables that are exported from this module\n */\n-bool\t\tcompute_query_id = false;\n bool\t\tlog_duration = false;\n bool\t\tDebug_print_plan = false;\n bool\t\tDebug_print_parse = false;\n@@ -1441,15 +1441,6 @@ static struct config_bool ConfigureNamesBool[] =\n \t\ttrue,\n \t\tNULL, NULL, NULL\n \t},\n-\t{\n-\t\t{\"compute_query_id\", PGC_SUSET, STATS_MONITORING,\n-\t\t\tgettext_noop(\"Compute query identifiers.\"),\n-\t\t\tNULL\n-\t\t},\n-\t\t&compute_query_id,\n-\t\tfalse,\n-\t\tNULL, NULL, NULL\n-\t},\n \t{\n \t\t{\"log_parser_stats\", PGC_SUSET, STATS_MONITORING,\n \t\t\tgettext_noop(\"Writes parser performance statistics to the server log.\"),\n@@ -4579,6 +4570,16 @@ static struct config_string ConfigureNamesString[] =\n \t\tcheck_backtrace_functions, assign_backtrace_functions, NULL\n \t},\n \n+\n+\t{\n+\t\t{\"query_id_provider\", PGC_SUSET, CLIENT_CONN_PRELOAD,\n+\t\t\tgettext_noop(\"Sets the query-id provider.\"),\n+\t\t},\n+\t\t&query_id_provider,\n+\t\t\"auto\",\n+\t\tcheck_query_id_provider, assign_query_id_provider, NULL\n+\t},\n+\n \t/* End-of-list marker */\n \t{\n \t\t{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL\ndiff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample\nindex efde01ee56..fc31ce15c4 100644\n--- a/src/backend/utils/misc/postgresql.conf.sample\n+++ b/src/backend/utils/misc/postgresql.conf.sample\n@@ -604,7 +604,7 @@\n \n # - Monitoring -\n \n-#compute_query_id = off\n+#query_id_provider = 'auto'\n #log_statement_stats = off\n #log_parser_stats = off\n #log_planner_stats = off\ndiff --git a/src/backend/utils/misc/queryjumble.c 
b/src/backend/utils/misc/queryjumble.c\nindex f004a9ce8c..709d654ea0 100644\n--- a/src/backend/utils/misc/queryjumble.c\n+++ b/src/backend/utils/misc/queryjumble.c\n@@ -47,6 +47,96 @@ static void JumbleRangeTable(JumbleState *jstate, List *rtable);\n static void JumbleRowMarks(JumbleState *jstate, List *rowMarks);\n static void JumbleExpr(JumbleState *jstate, Node *node);\n static void RecordConstLocation(JumbleState *jstate, int location);\n+static JumbleState *DummyJumbleQuery(Query *query, const char *querytext);\n+\n+char *query_id_provider = NULL;\n+static bool lock_provider = false;\n+\n+typedef JumbleState *(*JumbleQueryType) (Query *query, const char *querytext);\n+JumbleQueryType JumbleQuery = NULL;\n+\n+typedef struct QueryIdProviderExtra\n+{\n+\tJumbleQueryType pfunc;\n+} QueryIdProviderExtra;\n+\n+bool\n+check_query_id_provider(char **newval, void **extra, GucSource source)\n+{\n+\tQueryIdProviderExtra *param = NULL;\n+\n+\tif (lock_provider)\n+\t\treturn false;\n+\n+\tparam = (QueryIdProviderExtra *)malloc(sizeof(QueryIdProviderExtra));\n+\tif (param == NULL)\n+\t\treturn false;\n+\n+\tif (strcmp(*newval, \"none\") == 0 ||\n+\t\tstrcmp(*newval, \"auto\") == 0)\n+\t\tparam->pfunc = DummyJumbleQuery;\n+\telse if (strcmp(*newval, \"default\") == 0)\n+\t\tparam->pfunc = DefaultJumbleQuery;\n+\telse\n+\t\tparam->pfunc =\n+\t\t\tload_external_function(*newval, \"_PG_calculate_query_id\",\n+\t\t\t\t\t\t\t\t false, NULL);\n+\n+\tif (param->pfunc == NULL)\n+\t{\n+\t\tfree(param);\n+\t\tparam = NULL;\n+\t\tGUC_check_errdetail(\"failed to load query id provider\");\n+\t\treturn false;\n+\t}\n+\n+\t*extra = (void *) param;\n+\treturn true;\n+}\n+\n+void\n+assign_query_id_provider(const char *newval, void *extra)\n+{\n+\tQueryIdProviderExtra *param = (QueryIdProviderExtra *)extra;\n+\n+\tJumbleQuery = param->pfunc;\n+}\n+\n+bool\n+queryIdWanted(char *provider_name, bool use_existing)\n+{\n+\tJumbleQueryType func;\n+\n+\tAssert(query_id_provider != 
NULL);\n+\tAssert(JumbleQuery != NULL);\n+\n+\tif (lock_provider || strcmp(query_id_provider, \"none\") == 0)\n+\t\treturn false;\n+\n+\t/* use existing provider when use_existing */\n+\tif (strcmp(query_id_provider, \"auto\") != 0 && use_existing)\n+\t\treturn true;\n+\n+\tif (strcmp(provider_name, \"default\") == 0)\n+\t\tfunc = DefaultJumbleQuery;\n+\telse\n+\t\tfunc = load_external_function(provider_name, \"_PG_calculate_query_id\",\n+\t\t\t\t\t\t\t\t\t false, NULL);\n+\n+\tif (func == NULL)\n+\t\treturn false;\n+\n+\telog(LOG, \"query-id provider \\\"%s\\\" loaded\", provider_name);\n+\tJumbleQuery = func;\n+\n+\t/* expose real provider name */\n+\tSetConfigOption(\"query_id_provider\", provider_name,\n+\t\t\t\t\tPGC_POSTMASTER, PGC_S_OVERRIDE);\n+\n+\tlock_provider = true;\n+\n+\treturn true;\n+}\n \n /*\n * Given a possibly multi-statement source string, confine our attention to the\n@@ -92,7 +182,7 @@ CleanQuerytext(const char *query, int *location, int *len)\n }\n \n JumbleState *\n-JumbleQuery(Query *query, const char *querytext)\n+DefaultJumbleQuery(Query *query, const char *querytext)\n {\n \tJumbleState *jstate = NULL;\n \n@@ -132,6 +222,12 @@ JumbleQuery(Query *query, const char *querytext)\n \treturn jstate;\n }\n \n+static JumbleState *\n+DummyJumbleQuery(Query *query, const char *querytext)\n+{\n+\treturn NULL;\n+}\n+\n /*\n * Compute a query identifier for the given utility query string.\n */\ndiff --git a/src/include/utils/guc.h b/src/include/utils/guc.h\nindex 24a5d9d3fb..a7c3a4958e 100644\n--- a/src/include/utils/guc.h\n+++ b/src/include/utils/guc.h\n@@ -247,7 +247,6 @@ extern bool log_btree_build_stats;\n extern PGDLLIMPORT bool check_function_bodies;\n extern bool session_auth_is_superuser;\n \n-extern bool compute_query_id;\n extern bool log_duration;\n extern int\tlog_parameter_max_length;\n extern int\tlog_parameter_max_length_on_error;\ndiff --git a/src/include/utils/queryjumble.h b/src/include/utils/queryjumble.h\nindex 
83ba7339fa..682d687b79 100644\n--- a/src/include/utils/queryjumble.h\n+++ b/src/include/utils/queryjumble.h\n@@ -15,6 +15,7 @@\n #define QUERYJUBLE_H\n \n #include \"nodes/parsenodes.h\"\n+#include \"utils/guc.h\"\n \n #define JUMBLE_SIZE\t\t\t\t1024\t/* query serialization buffer size */\n \n@@ -52,7 +53,12 @@ typedef struct JumbleState\n \tint\t\t\thighest_extern_param_id;\n } JumbleState;\n \n+extern char *query_id_provider;\n+extern JumbleState *(*JumbleQuery)(Query *query, const char *querytext);\n+\n+JumbleState *DefaultJumbleQuery(Query *query, const char *querytext);\n+bool queryIdWanted(char *provider_name, bool use_existing);\n+bool check_query_id_provider(char **newval, void **extra, GucSource source);\n+void assign_query_id_provider(const char *newval, void *extra);\n const char *CleanQuerytext(const char *query, int *location, int *len);\n-JumbleState *JumbleQuery(Query *query, const char *querytext);\n-\n #endif\t\t\t\t\t\t\t/* QUERYJUMBLE_H */\ndiff --git a/src/test/regress/expected/explain.out b/src/test/regress/expected/explain.out\nindex cda28098ba..16375d5596 100644\n--- a/src/test/regress/expected/explain.out\n+++ b/src/test/regress/expected/explain.out\n@@ -477,7 +477,7 @@ select jsonb_pretty(\n (1 row)\n \n rollback;\n-set compute_query_id = on;\n+set compute_query_id = 'default';\n select explain_filter('explain (verbose) select * from int8_tbl i8');\n explain_filter \n ----------------------------------------------------------------", "msg_date": "Thu, 13 May 2021 13:26:37 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "At Thu, 13 May 2021 13:26:37 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Thu, 13 May 2021 12:11:12 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > As the result, even if we take the DLL approach, still not need to\n> > split out the query-id 
provider part. By the following config:\n> > \n> > > query_id_provider = 'pg_stat_statements'\n> > \n> > the core can obtain the entrypoint of, say, \"_PG_calculate_query_id\"\n> > to call it. And it can be of another module.\n> > \n> > It seems like the only problem doing that is we don't have a means to\n> > call per-process intializer for a preload libralies.\n> \n> So this is a crude PoC of that.\n> \n> pg_stat_statemnts defines its own query-id provider function in\n> pg_stat_statements which calls in-core DefaultJumbeQuery (end emits a\n> log line).\n> \n> If server started with query_id_provider='auto' and pg_stat_statements\n> is not loaded, pg_stat_activity.query_id is null.\n> \n> If query_id_provider='auto' and pg_stat_statements is loaded,\n> pg_stat_activity.query_id is filled in with a query id.\n> \n> If query_id_provider='default' or 'pg_stat_statements' and\n> pg_stat_statements is not loaded, pg_stat_activity.query_id is filled\n> in with a query id.\n> \n> If query_id_provider='none' and pg_stat_statements is loaded, it\n> complains as \"query id provider is not available\" and refuss to start.\n> \n> If showing the variable, it shows the real provider name instead of\n> the setting in postgresql.conf.\n\nThe change contains needless things that tries to handle per-backend\nchange case, so it would be simpler assuming we don't want on-the-fly\nchange of provider (and I believe so since that change surely causes\ninconsistency)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 13 May 2021 13:32:09 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "Le jeu. 
13 mai 2021 à 12:26, Kyotaro Horiguchi <horikyota.ntt@gmail.com> a\nécrit :\n\n> At Thu, 13 May 2021 12:11:12 +0900 (JST), Kyotaro Horiguchi <\n> horikyota.ntt@gmail.com> wrote in\n> > As the result, even if we take the DLL approach, still not need to\n> > split out the query-id provider part. By the following config:\n> >\n> > > query_id_provider = 'pg_stat_statements'\n> >\n> > the core can obtain the entrypoint of, say, \"_PG_calculate_query_id\"\n> > to call it. And it can be of another module.\n> >\n> > It seems like the only problem doing that is we don't have a means to\n> > call per-process intializer for a preload libralies.\n> \n> So this is a crude PoC of that.\n> \n> pg_stat_statemnts defines its own query-id provider function in\n> pg_stat_statements which calls in-core DefaultJumbeQuery (end emits a\n> log line).\n> \n> If server started with query_id_provider='auto' and pg_stat_statements\n> is not loaded, pg_stat_activity.query_id is null.\n> \n> If query_id_provider='auto' and pg_stat_statements is loaded,\n> pg_stat_activity.query_id is filled in with a query id.\n> \n> If query_id_provider='default' or 'pg_stat_statements' and\n> pg_stat_statements is not loaded, pg_stat_activity.query_id is filled\n> in with a query id.\n> \n> If query_id_provider='none' and pg_stat_statements is loaded, it\n> complains as \"query id provider is not available\" and refuss to start.\n> \n> If showing the variable, it shows the real provider name instead of\n> the setting in postgresql.conf.\n\nwhat if you want to have some other extensions like pg_stat_kcache or\npg_store_plans that need a query_id but don't really care if\npg_stat_statements is enabled or not? should they all declare their own\nwrapper too? 
should the system complain or silently ignore when they all try to install their provider function?", "msg_date": "Thu, 13 May 2021 12:33:47 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Wed, May 12, 2021 at 9:03 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Wed, May 12, 2021 at 11:33:32PM -0400, Bruce Momjian wrote:\n> > How do they know to set shared_preload_libraries then? We change the\n> > user API all the time, especially in GUCs, and even rename them, but for\n> > some reason we don't think people using pg_stat_statements can be\n> > trusted to read the release notes and change their behavior. I just\n> > don't get it.\n>\n> I don't know what to say. So here is a summary of the complaints that I'm\n> aware of:\n>\n> - https://www.postgresql.org/message-id/1953aec168224b95b0c962a622bef0794da6ff40.camel@moonset.ru\n> That was only a couple of days after the commit just before the feature freeze,\n> so it may be the less relevant complaint.\n>\n> - https://www.postgresql.org/message-id/CAOxo6XJEYunL71g0yD-zRzNRRqBG0Ssw-ARygy5pGRdSjK5YLQ%40mail.gmail.com\n> Did a git bisect to find the commit that changed the behavior and somehow\n> didn't notice the new setting\n>\n> - this thread, with Fuji-san saying:\n>\n> > I'm afraid that users may easily forget to enable compute_query_id when using\n> > pg_stat_statements (because this setting was not necessary in v13 or before)\n>\n> - this thread, with Peter E. saying:\n>\n> > Now there is the additional burden of turning on this weird setting that\n> > no one understands. That's a 50% increase in burden. And almost no one will\n> > want to use a nondefault setting. pg_stat_statements is pretty popular. 
I\n> > think leaving in this requirement will lead to widespread confusion and\n> > complaints.\n> \n> - this thread, with Pavel saying:\n> \n> > Until now, the pg_stat_statements was zero-config. So the change is not user\n> > friendly.\n> \n> So it's a mix of \"it's changing something that didn't change in a long time\"\n> and \"it's adding extra footgun and/or burden as it's not doing by default what\n> the majority of users will want\", with an overwhelming majority of people\n> supporting the \"we don't want that extra burden\".\n\nFor what it's worth, I don't think the actual changing of an extra\nsetting is that big a burden: it's the figuring out that you need to\nchange it, and how you should configure it, that is the problem.\nEspecially since all major search engines still seem to return 9.4 (!)\ndocumentation as the first hit for a \"pg_stat_statements\" search. The\ncommon case (installing pg_stat_statements but not tweaking query id\ngeneration) should be simple.\n\n\n", "msg_date": "Wed, 12 May 2021 21:52:34 -0700", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "Le jeu. 13 mai 2021 à 12:52, Maciek Sakrejda <m.sakrejda@gmail.com> a\nécrit :\n\n>\n> For what it's worth, I don't think the actual changing of an extra\n> setting is that big a burden: it's the figuring out that you need to\n> change it, and how you should configure it, that is the problem.\n> Especially since all major search engines still seem to return 9.4 (!)\n> documentation as the first hit for a \"pg_stat_statements\" search. The\n> common case (installing pg_stat_statements but not tweaking query id\n> generation) should be simple.\n>\n\nthe v2 patch I sent should address both your requirements.\n", "msg_date": "Thu, 13 May 2021 12:58:41 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Wed, May 12, 2021 at 9:58 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> Le jeu. 13 mai 2021 à 12:52, Maciek Sakrejda <m.sakrejda@gmail.com> a écrit :\n>>\n>> For what it's worth, I don't think the actual changing of an extra\n>> setting is that big a burden: it's the figuring out that you need to\n>> change it, and how you should configure it, that is the problem.\n>> Especially since all major search engines still seem to return 9.4 (!)\n>> documentation as the first hit for a \"pg_stat_statements\" search. The\n>> common case (installing pg_stat_statements but not tweaking query id\n>> generation) should be simple.\n>\n>\n> the v2 patch I sent should address both your requirements.\n\nYes, thanks--I just tried it and this is great. I just wanted to argue\nagainst reversing course here.\n\n\n", "msg_date": "Wed, 12 May 2021 22:31:01 -0700", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Wed, May 12, 2021 at 10:31:01PM -0700, Maciek Sakrejda wrote:\n> On Wed, May 12, 2021 at 9:58 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > Le jeu. 
13 mai 2021 à 12:52, Maciek Sakrejda <m.sakrejda@gmail.com> a écrit :\n> >>\n> >> For what it's worth, I don't think the actual changing of an extra\n> >> setting is that big a burden: it's the figuring out that you need to\n> >> change it, and how you should configure it, that is the problem.\n> >> Especially since all major search engines still seem to return 9.4 (!)\n> >> documentation as the first hit for a \"pg_stat_statements\" search. The\n> >> common case (installing pg_stat_statements but not tweaking query id\n> >> generation) should be simple.\n> >\n> >\n> > the v2 patch I sent should address both your requirements.\n> \n> Yes, thanks--I just tried it and this is great. I just wanted to argue\n> against reversing course here.\n\nOh ok. Good news then, thanks!\n\n\n", "msg_date": "Thu, 13 May 2021 13:46:02 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "At Thu, 13 May 2021 12:33:47 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> Le jeu. 
13 mai 2021 à 12:26, Kyotaro Horiguchi <horikyota.ntt@gmail.com> a\n> écrit :\n> \n> > At Thu, 13 May 2021 12:11:12 +0900 (JST), Kyotaro Horiguchi <\n> > horikyota.ntt@gmail.com> wrote in\n> > pg_stat_statemnts defines its own query-id provider function in\n> > pg_stat_statements which calls in-core DefaultJumbeQuery (end emits a\n> > log line).\n> >\n> > If server started with query_id_provider='auto' and pg_stat_statements\n> > is not loaded, pg_stat_activity.query_id is null.\n> >\n> > If query_id_provider='auto' and pg_stat_statements is loaded,\n> > pg_stat_activity.query_id is filled in with a query id.\n> >\n> > If query_id_provider='default' or 'pg_stat_statements' and\n> > pg_stat_statements is not loaded, pg_stat_activity.query_id is filled\n> > in with a query id.\n> >\n> > If query_id_provider='none' and pg_stat_statements is loaded, it\n> > complains as \"query id provider is not available\" and refuss to start.\n> >\n> > If showing the variable, it shows the real provider name instead of\n> > the setting in postgresql.conf.\n> >\n> \n> what if you want to have some other extensions like pg_stat_kcache or\n> pg_store_plans that need a query_id but don't really care if\n> pg_stat_statements is enabled or not? should they all declare their own\n\nThanks for looking at it.\n\nThe additional provider function in pg_stat_statements is just an\nexample to show what happens if it needs its own query-id provider, which is\nuseless in reality. In reality pg_stat_statements just calls\n\"queryIdWanted(\"default\", true)\" to use any query-id provider, with\nthe in-core one as the fallback implementation, so there is no need to define its\nown.\n\nAny extension can use the in-core provider and accept any other\nones by calling queryIdWanted(\"default\", true), then get what they want\nregardless of the existence of pg_stat_statements.\n\n> wrapper too? 
should the system complain or silently ignore when they all\n> try to install their provider function?\n\nOf course if two extensions require different query-id providers, they\njust cannot coexist (that is, the server refuses to start). It is quite\nsane behavior from the standpoint of safety. I think almost all\nquery-id users don't insist on a specific implementation. (So the\nsecond parameter to queryIdWanted() could be omitted and assumed to be\ntrue.)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 13 May 2021 16:15:30 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Thu, May 13, 2021 at 04:15:30PM +0900, Kyotaro Horiguchi wrote:\n> > \n> > what if you want to have some other extensions like pg_stat_kcache or\n> > pg_store_plans that need a query_id but don't really care if\n> > pg_stat_statements is enabled or not? should they all declare their own\n> \n> Thanks for looking at it.\n> \n> The additional provider function in pg_stat_statements is just an\n> example to show what happens if it needs its own query-id provider, which is\n> useless in reality. In reality pg_stat_statements just calls\n> \"queryIdWanted(\"default\", true)\" to use any query-id provider, with\n> the in-core one as the fallback implementation, so there is no need to define its\n> own.\n> \n> Any extension can use the in-core provider and accept any other\n> ones by calling queryIdWanted(\"default\", true), then get what they want\n> regardless of the existence of pg_stat_statements.\n\nI see, thanks for the clarification. So I looked a bit at the implementation,\nmostly the new queryIdWanted() and check_query_id_provider(), and it seems a bit\ninconsistent.\n\nIt's not clear to me how this should be used. 
It seems that it's designed to\nallow any plugin to activate a query_id implementation, but if a third-party\nquery_id provider tries to activate its own implementation it will fail if you\nalso want to use pg_stat_statements as both will try to activate incompatible\nimplementations. It seems to me that queryIdWanted() should only be used for\nenabling core query_id if the configuration allows the core implementation to\nbe enabled, and everything else should be manually configured by users, so\nthere shouldn't be a provider_name.\n\n\n", "msg_date": "Thu, 13 May 2021 15:42:20 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Thu, May 13, 2021 at 12:03:42PM +0800, Julien Rouhaud wrote:\n> On Wed, May 12, 2021 at 11:33:32PM -0400, Bruce Momjian wrote:\n> I don't know what to say. So here is a summary of the complaints that I'm\n> aware of:\n> \n> - https://www.postgresql.org/message-id/1953aec168224b95b0c962a622bef0794da6ff40.camel@moonset.ru\n> That was only a couple of days after the commit just before the feature freeze,\n> so it may be the less relevant complaint.\n> \n> - https://www.postgresql.org/message-id/CAOxo6XJEYunL71g0yD-zRzNRRqBG0Ssw-ARygy5pGRdSjK5YLQ%40mail.gmail.com\n> Did a git bisect to find the commit that changed the behavior and somehow\n> didn't notice the new setting\n> \n> - this thread, with Fuji-san saying:\n> \n> > I'm afraid that users may easily forget to enable compute_query_id when using\n> > pg_stat_statements (because this setting was not necessary in v13 or before)\n> \n> - this thread, with Peter E. saying:\n> \n> > Now there is the additional burden of turning on this weird setting that\n> > no one understands. That's a 50% increase in burden. And almost no one will\n> > want to use a nondefault setting. pg_stat_statements is pretty popular. 
I\n> > think leaving in this requirement will lead to widespread confusion and\n> > complaints.\n> \n> - this thread, with Pavel saying:\n> \n> > Until now, the pg_stat_statements was zero-config. So the change is not user\n> > friendly.\n> \n> So it's a mix of \"it's changing something that didn't change in a long time\"\n> and \"it's adding extra footgun and/or burden as it's not doing by default what\n> the majority of users will want\", with an overwhelming majority of people\n> supporting the \"we don't want that extra burden\".\n\nWell, now that we have clear warnings when it is misconfigured,\nespecially when querying the pg_stat_statements view, are these\ncomplaints still valid? Also, how is modifying\nshared_preload_libraries zero-config, but modifying\nshared_preload_libraries and compute_query_id a huge burden?\n\nI am personally not comfortable committing a patch to add an auto option\nthe way it is implemented, so another committer will need to take\nownership of this, or the entire feature can be removed.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Thu, 13 May 2021 10:41:43 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Thu, May 13, 2021 at 7:42 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Thu, May 13, 2021 at 12:03:42PM +0800, Julien Rouhaud wrote:\n> > On Wed, May 12, 2021 at 11:33:32PM -0400, Bruce Momjian wrote:\n> > I don't know what to say. 
So here is a summary of the complaints that I'm\n> > aware of:\n> >\n> > - https://www.postgresql.org/message-id/1953aec168224b95b0c962a622bef0794da6ff40.camel@moonset.ru\n> > That was only a couple of days after the commit just before the feature freeze,\n> > so it may be the less relevant complaint.\n> >\n> > - https://www.postgresql.org/message-id/CAOxo6XJEYunL71g0yD-zRzNRRqBG0Ssw-ARygy5pGRdSjK5YLQ%40mail.gmail.com\n> > Did a git bisect to find the commit that changed the behavior and somehow\n> > didn't notice the new setting\n> >\n> > - this thread, with Fuji-san saying:\n> >\n> > > I'm afraid that users may easily forget to enable compute_query_id when using\n> > > pg_stat_statements (because this setting was not necessary in v13 or before)\n> >\n> > - this thread, with Peter E. saying:\n> >\n> > > Now there is the additional burden of turning on this weird setting that\n> > > no one understands. That's a 50% increase in burden. And almost no one will\n> > > want to use a nondefault setting. pg_stat_statements is pretty popular. I\n> > > think leaving in this requirement will lead to widespread confusion and\n> > > complaints.\n> >\n> > - this thread, with Pavel saying:\n> >\n> > > Until now, the pg_stat_statements was zero-config. So the change is not user\n> > > friendly.\n> >\n> > So it's a mix of \"it's changing something that didn't change in a long time\"\n> > and \"it's adding extra footgun and/or burden as it's not doing by default what\n> > the majority of users will want\", with an overwhelming majority of people\n> > supporting the \"we don't want that extra burden\".\n>\n> Well, now that we have clear warnings when it is misconfigured,\n> especially when querying the pg_stat_statements view, are these\n> complaints still valid? 
Also, how is modifying\n> shared_preload_libraries zero-config, but modifying\n> shared_preload_libraries and compute_query_id a huge burden?\n\nThe warning makes it clear that there's something wrong, but not how\nto fix it (as I noted in another message in this thread, a web search\nfor pg_stat_statements docs still leads to an obsolete version). I\ndon't think anyone is arguing that this is insurmountable for all\nusers, but it is additional friction, and every bit of friction makes\nPostgres harder to use. Users don't read documentation, or misread\ndocumentation, or just are not sure what the documentation or the\nwarning is telling them, in spite of our best efforts.\n\nAnd you're right, modifying shared_preload_libraries is not\nzero-config--I would love it if we could drop that requirement ;).\nStill, adding another setting is clearly one more thing to get wrong.\n\n> I am personally not comfortable committing a patch to add an auto option\n> the way it is implemented, so another committer will need to take\n> ownership of this, or the entire feature can be removed.\n\nAssuming we do want to avoid additional configuration requirements for\npg_stat_statements, is there another mechanism you feel would work\nbetter? Or is that constraint incompatible with sane behavior for this\nfeature?\n\n\n", "msg_date": "Thu, 13 May 2021 08:32:50 -0700", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Thu, May 13, 2021 at 10:41:43AM -0400, Bruce Momjian wrote:\n> \n> Well, now that we have clear warnings when it is misconfigured,\n> especially when querying the pg_stat_statements view, are these\n> complaints still valid?\n\nI'm personally fine with it, and I can send a new version with just the\nwarning when calling pg_stat_statements() or one of the view(s). 
Or was there\nother warnings that you were referring too?\n\n> I am personally not comfortable committing a patch to add an auto option\n> the way it is implemented, so another committer will need to take\n> ownership of this, or the entire feature can be removed.\n\nThat's fair. Just to be clear, I'm assuming that you also don't like\nHorigushi-san approach either?\n\n\n", "msg_date": "Thu, 13 May 2021 23:35:13 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Thu, May 13, 2021 at 08:32:50AM -0700, Maciek Sakrejda wrote:\n> \n> The warning makes it clear that there's something wrong, but not how\n> to fix it\n\nI'm confused, are we talking about the new warning in v2 as suggested by Pavel?\nAs it should make things quite clear:\n\n+SELECT count(*) FROM pg_stat_statements;\n+WARNING: Query identifier calculation seems to be disabled\n+HINT: If you don't want to use a third-party module to compute query identifiers, you may want to enable compute_query_id\n\nThe wording can of course be improved.\n\n\n", "msg_date": "Thu, 13 May 2021 23:38:48 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Thu, May 13, 2021 at 08:32:50AM -0700, Maciek Sakrejda wrote:\n> > Well, now that we have clear warnings when it is misconfigured,\n> > especially when querying the pg_stat_statements view, are these\n> > complaints still valid? Also, how is modifying\n> > shared_preload_libraries zero-config, but modifying\n> > shared_preload_libraries and compute_query_id a huge burden?\n> \n> The warning makes it clear that there's something wrong, but not how\n> to fix it (as I noted in another message in this thread, a web search\n> for pg_stat_statements docs still leads to an obsolete version). 
I\n> don't think anyone is arguing that this is insurmountable for all\n> users, but it is additional friction, and every bit of friction makes\n> Postgres harder to use. Users don't read documentation, or misread\n> documentation, or just are not sure what the documentation or the\n> warning is telling them, in spite of our best efforts.\n\nWell, then let's have the error message tell them what is wrong and how\nto fix it. My issue is that 'auto' spreads confusion around the entire\nAPI, as you can see from the discussion in this thread.\n\n> And you're right, modifying shared_preload_libraries is not\n> zero-config--I would love it if we could drop that requirement ;).\n> Still, adding another setting is clearly one more thing to get wrong.\n> \n> > I am personally not comfortable committing a patch to add an auto option\n> > the way it is implemented, so another committer will need to take\n> > ownership of this, or the entire feature can be removed.\n> \n> Assuming we do want to avoid additional configuration requirements for\n> pg_stat_statements, is there another mechanism you feel would work\n> better? 
Or is that constraint incompatible with sane behavior for this\n> feature?\n\nI think we just need to leave it is on/off, and then help people find\nthe way to fix it if the misconfigure it, which I think is already been\nshown to be possible.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Thu, 13 May 2021 11:51:43 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Thu, May 13, 2021 at 11:35:13PM +0800, Julien Rouhaud wrote:\n> On Thu, May 13, 2021 at 10:41:43AM -0400, Bruce Momjian wrote:\n> > \n> > Well, now that we have clear warnings when it is misconfigured,\n> > especially when querying the pg_stat_statements view, are these\n> > complaints still valid?\n> \n> I'm personally fine with it, and I can send a new version with just the\n> warning when calling pg_stat_statements() or one of the view(s). Or was there\n> other warnings that you were referring too?\n\nNo, that was the big fix that made misconfiguration very clear to users\nwho didn't see the change before.\n\n> > I am personally not comfortable committing a patch to add an auto option\n> > the way it is implemented, so another committer will need to take\n> > ownership of this, or the entire feature can be removed.\n> \n> That's fair. Just to be clear, I'm assuming that you also don't like\n> Horigushi-san approach either?\n\nUh, anything with 'auto', I don't like. 
I am afraid if I commit it, I\nwould feel responsible for the long tail of confusion this will cause\nusers, which is why I was saying I would rather remove it than be\nresponsible for causing such confusion.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Thu, 13 May 2021 12:02:15 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Thu, May 13, 2021 at 8:38 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Thu, May 13, 2021 at 08:32:50AM -0700, Maciek Sakrejda wrote:\n> >\n> > The warning makes it clear that there's something wrong, but not how\n> > to fix it\n>\n> I'm confused, are we talking about the new warning in v2 as suggested by Pavel?\n> As it should make things quite clear:\n>\n> +SELECT count(*) FROM pg_stat_statements;\n> +WARNING: Query identifier calculation seems to be disabled\n> +HINT: If you don't want to use a third-party module to compute query identifiers, you may want to enable compute_query_id\n>\n> The wording can of course be improved.\n\nI meant that no warning can be as clear as things just working, but I\ndo have feedback on the specific message here:\n\n * \"seems to\" be disabled? Is it? Any reason not to be more definitive here?\n * On reading the beginning of the hint, I can see users asking\nthemselves, \"Do I want to use a third-party module to compute query\nidentifiers?\"\n * \"may want to enable\"? 
Are there any situations where I don't want\nto use a third-party module *and* I don't want to enable\ncompute_query_id?\n\nSo maybe something like\n\n> +SELECT count(*) FROM pg_stat_statements;\n> +WARNING: Query identifier calculation is disabled\n> +HINT: You must enable compute_query_id or configure a third-party module to compute query identifiers in order to use pg_stat_statements.\n\n(I admit \"configure a third-party module\" is pretty vague, but I think\nthat suggests it's only an option to consider if you know what you're\ndoing.)\n\nAlso, if you're configuring this for usage with a tool like pganalyze,\nand neglect to run a manual query (we guide users to do that, but they\nmay skip that step), the warnings may not even be visible (the Go\ndriver we are using does not surface client warnings). Should this be\nan error instead of a warning? Is it ever useful to get an empty\nresult set from querying pg_stat_statements? Using an error here would\nparallel the behavior of shared_preload_libraries not including\npg_stat_statements.\n\n\n", "msg_date": "Thu, 13 May 2021 09:30:55 -0700", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Thu, May 13, 2021 at 09:30:55AM -0700, Maciek Sakrejda wrote:\n> On Thu, May 13, 2021 at 8:38 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > On Thu, May 13, 2021 at 08:32:50AM -0700, Maciek Sakrejda wrote:\n> > >\n> > > The warning makes it clear that there's something wrong, but not how\n> > > to fix it\n> >\n> > I'm confused, are we talking about the new warning in v2 as suggested by Pavel?\n> > As it should make things quite clear:\n> >\n> > +SELECT count(*) FROM pg_stat_statements;\n> > +WARNING: Query identifier calculation seems to be disabled\n> > +HINT: If you don't want to use a third-party module to compute query identifiers, you may want to enable compute_query_id\n> >\n> > The wording can of 
course be improved.\n> \n> I meant that no warning can be as clear as things just working, but I\n> do have feedback on the specific message here:\n> \n> * \"seems to\" be disabled? Is it? Any reason not to be more definitive here?\n> * On reading the beginning of the hint, I can see users asking\n> themselves, \"Do I want to use a third-party module to compute query\n> identifiers?\"\n> * \"may want to enable\"? Are there any situations where I don't want\n> to use a third-party module *and* I don't want to enable\n> compute_query_id?\n> \n> So maybe something like\n> \n> > +SELECT count(*) FROM pg_stat_statements;\n> > +WARNING: Query identifier calculation is disabled\n> > +HINT: You must enable compute_query_id or configure a third-party module to compute query identifiers in order to use pg_stat_statements.\n\nYes, I like this. The reason the old message was so vague is that\n'auto', the default some people wanted, didn't issue that error, only\n'off' did, so there was an assumption you wanted a custom module since\nyou changed the value to off. If we are going with just on/off, no\nauto, the message you suggest, leading with compute_query_id, is the\nright approach.\n\n> (I admit \"configure a third-party module\" is pretty vague, but I think\n> that suggests it's only an option to consider if you know what you're\n> doing.)\n\nSeems fine to me.\n\n> Also, if you're configuring this for usage with a tool like pganalyze,\n> and neglect to run a manual query (we guide users to do that, but they\n> may skip that step), the warnings may not even be visible (the Go\n> driver we are using does not surface client warnings). Should this be\n> an error instead of a warning? Is it ever useful to get an empty\n> result set from querying pg_stat_statements? 
Using an error here would\n> parallel the behavior of shared_preload_libraries not including\n> pg_stat_statements.\n\nGood question.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Thu, 13 May 2021 12:39:38 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "\nOn 5/13/21 12:18 AM, Fujii Masao wrote:\n>\n>\n>\n> If we do this, compute_query_id=auto seems to be similar to\n> huge_pages=try.\n> When huge_pages=try, whether huge pages is actually used is defined\n> depending\n> outside PostgreSQL (i.e, OS setting in this case). Neither\n> pg_settings.setting nor\n> souce are not changed in that case.\n>\n>\n\nNot a bad analogy, showing there's some precedent for this sort of thing.\n\n\nThe only thing that bugs me is that we're pretty damn late in the\nprocess to be engaging in this amount of design.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 13 May 2021 12:45:07 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Thu, May 13, 2021 at 12:45:07PM -0400, Andrew Dunstan wrote:\n> \n> On 5/13/21 12:18 AM, Fujii Masao wrote:\n> >\n> >\n> >\n> > If we do this, compute_query_id=auto seems to be similar to\n> > huge_pages=try.\n> > When huge_pages=try, whether huge pages is actually used is defined\n> > depending\n> > outside PostgreSQL (i.e, OS setting in this case). 
Neither\n> > pg_settings.setting nor\n> > souce are not changed in that case.\n> >\n> >\n> \n> Not a bad analogy, showing there's some precedent for this sort of thing.\n> \n> \n> The only thing that bugs me is that we're pretty damn late in the\n> process to be engaging in this amount of design.\n\nThe issue is that there is no external way to check what query id\ncomputation is being used, unlike huge pages, which can be queried from\nthe operating system. I also agree it is late, and discussion of auto\ncontinues to show cases where this makes later improvements more\ncomplex.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Thu, 13 May 2021 13:05:38 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> The only thing that bugs me is that we're pretty damn late in the\n> process to be engaging in this amount of design.\n\nIndeed. I feel that this feature was forced in before it was really\nready.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 13 May 2021 13:17:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Thu, May 13, 2021 at 01:17:16PM -0400, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > The only thing that bugs me is that we're pretty damn late in the\n> > process to be engaging in this amount of design.\n> \n> Indeed. I feel that this feature was forced in before it was really\n> ready.\n\nThe user API has always been a challenge for this feature but I thought\nwe had it ironed out. 
What I didn't anticipate were the configuration\ncomplaints, and if those are valid, the feature should be removed since\nwe can't improve it at this point, nor do I have any idea if that is\neven possible without unacceptable negatives. If the configuration\ncomplaints are invalid, what we have now is very good, I think, though\nadding more warnings about misconfiguration would be wise.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Thu, 13 May 2021 13:33:23 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > The only thing that bugs me is that we're pretty damn late in the\n> > process to be engaging in this amount of design.\n> \n> Indeed. I feel that this feature was forced in before it was really\n> ready.\n\nI'm coming around to have a similar feeling. While having an\nalternative query ID might be useful, I have a hard time seeing it as\nlikely to be a hugely popular feature that is worth a lot of users\ncomplaining (as we've seen already, multiple times, before even getting\nto beta...) that things aren't working anymore. 
That we can't figure\nout which libraries to load automatically based on what extensions have\nbeen installed and therefore make everyone have to change\nshared_preload_libraries isn't a good thing and requiring additional\nconfiguration for extremely common extensions like pg_stat_statements is\nmaking it worse.\n\nThanks,\n\nStephen", "msg_date": "Thu, 13 May 2021 13:33:27 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "Re: Bruce Momjian\n> Well, now that we have clear warnings when it is misconfigured,\n> especially when querying the pg_stat_statements view, are these\n> complaints still valid? Also, how is modifying\n> shared_preload_libraries zero-config, but modifying\n> shared_preload_libraries and compute_query_id a huge burden?\n\nIt's zero-config in the sense that if you want to have\npg_stat_statements, loading that module via shared_preload_libraries\nis just natural.\n\nHaving to set compute_query_id isn't natural. 
It's a setting with a\ncompletely different name, and the connection of pg_stat_statements to\ncompute_query_id isn't obvious.\n\nThe reasoning with \"we have warnings and stuff\" might be ok if\npg_stat_statements were a new thing, but it has worked via\nshared_preload_libraries only for the last decade, and requiring\nsomething extra will confuse probably every single user of\npg_stat_statements out there.\n\nPerhaps worse, note that these warnings will likely first be seen by\nthe end users of databases, not by the admin performing the initial\nsetup or upgrade, who will not be able to fix the problem themselves.\n\nChristoph\n\n\n", "msg_date": "Thu, 13 May 2021 19:39:45 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Thu, May 13, 2021 at 01:33:27PM -0400, Stephen Frost wrote:\n> I'm coming around to have a similar feeling. While having an\n> alternative query ID might be useful, I have a hard time seeing it as\n> likely to be a hugely popular feature that is worth a lot of users\n> complaining (as we've seen already, multiple times, before even getting\n> to beta...) that things aren't working anymore. That we can't figure\n> out which libraries to load automatically based on what extensions have\n> been installed and therefore make everyone have to change\n> shared_preload_libraries isn't a good thing and requiring additional\n> configuration for extremely common extensions like pg_stat_statements is\n> making it worse.\n\nWould someone please explain what is wrong with what is in the tree\nnow, except that it needs additional warnings about misconfiguration? 
\nRequiring two postgresql.conf changes instead of one doesn't seem like a\nvalid complaint to me, especially if the warnings are in place and the\nrelease notes mention it.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Thu, 13 May 2021 13:41:15 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "Greetings,\n\n* Bruce Momjian (bruce@momjian.us) wrote:\n> On Thu, May 13, 2021 at 01:33:27PM -0400, Stephen Frost wrote:\n> > I'm coming around to have a similar feeling. While having an\n> > alternative query ID might be useful, I have a hard time seeing it as\n> > likely to be a hugely popular feature that is worth a lot of users\n> > complaining (as we've seen already, multiple times, before even getting\n> > to beta...) that things aren't working anymore. That we can't figure\n> > out which libraries to load automatically based on what extensions have\n> > been installed and therefore make everyone have to change\n> > shared_preload_libraries isn't a good thing and requiring additional\n> > configuration for extremely common extensions like pg_stat_statements is\n> > making it worse.\n> \n> Would someone please explain what is wrong with what is in the tree\n> now, except that it needs additional warnings about misconfiguration? 
\n> Requiring two postgresql.conf changes instead of one doesn't seem like a\n> valid complaint to me, especially if the warnings are in place and the\n> release notes mention it.\n\nWill you be updating pg_upgrade to detect and throw a warning during\ncheck in the event that it discovers a broken config?\n\nIf not, then I don't think you're correct in arguing that this need for\nadditional configuration isn't a valid complaint.\n\nThanks,\n\nStephen", "msg_date": "Thu, 13 May 2021 13:51:07 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Thu, May 13, 2021 at 01:51:07PM -0400, Stephen Frost wrote:\n> Greetings,\n> \n> * Bruce Momjian (bruce@momjian.us) wrote:\n> > On Thu, May 13, 2021 at 01:33:27PM -0400, Stephen Frost wrote:\n> > > I'm coming around to have a similar feeling. While having an\n> > > alternative query ID might be useful, I have a hard time seeing it as\n> > > likely to be a hugely popular feature that is worth a lot of users\n> > > complaining (as we've seen already, multiple times, before even getting\n> > > to beta...) that things aren't working anymore. That we can't figure\n> > > out which libraries to load automatically based on what extensions have\n> > > been installed and therefore make everyone have to change\n> > > shared_preload_libraries isn't a good thing and requiring additional\n> > > configuration for extremely common extensions like pg_stat_statements is\n> > > making it worse.\n> > \n> > Would someone please explain what is wrong with what is in the tree\n> > now, except that it needs additional warnings about misconfiguration? 
\n> > Requiring two postgresql.conf changes instead of one doesn't seem like a\n> > valid complaint to me, especially if the warnings are in place and the\n> > release notes mention it.\n> \n> Will you be updating pg_upgrade to detect and throw a warning during\n> check in the event that it discovers a broken config?\n\nUh, how does this relate to pg_upgrade? Are you saying someone\nmisconfigures the new system with pg_stat_statements but not query id? \nThe server would still start and upgrade, no? How is this different\nfrom any other GUC we rename? I am not following much of the logic in\nthis discussion, frankly.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Thu, 13 May 2021 13:54:55 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Thu, May 13, 2021 at 07:39:45PM +0200, Christoph Berg wrote:\n> Re: Bruce Momjian\n> > Well, now that we have clear warnings when it is misconfigured,\n> > especially when querying the pg_stat_statements view, are these\n> > complaints still valid? Also, how is modifying\n> > shared_preload_libraries zero-config, but modifying\n> > shared_preload_libraries and compute_query_id a huge burden?\n> \n> It's zero-config in the sense that if you want to have\n> pg_stat_statements, loading that module via shared_preload_libraries\n> is just natural.\n> \n> Having to set compute_query_id isn't natural. 
It's a setting with a\n> completely different name, and the connection of pg_stat_statements to\n> compute_query_id isn't obvious.\n> \n> The reasoning with \"we have warnings and stuff\" might be ok if\n> pg_stat_statements were a new thing, but it has worked via\n> shared_preload_libraries only for the last decade, and requiring\n> something extra will confuse probably every single user of\n> pg_stat_statements out there.\n> \n> Perhaps worse, note that these warnings will likely first be seen by\n> the end users of databases, not by the admin performing the initial\n> setup or upgrade, who will not be able to fix the problem themselves.\n\nWell, but doing this extra configuration, the query id shows up in a lot\nmore places. I can't imagine how this could be done cleanly without\nrequiring extra configuration, unless the query_id computation was\ncheaper to compute and we could enable it by default.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Thu, 13 May 2021 13:59:47 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "Greetings,\n\n* Bruce Momjian (bruce@momjian.us) wrote:\n> On Thu, May 13, 2021 at 07:39:45PM +0200, Christoph Berg wrote:\n> > Re: Bruce Momjian\n> > > Well, now that we have clear warnings when it is misconfigured,\n> > > especially when querying the pg_stat_statements view, are these\n> > > complaints still valid? 
Also, how is modifying\n> > > shared_preload_libraries zero-config, but modifying\n> > > shared_preload_libraries and compute_query_id a huge burden?\n> > \n> > It's zero-config in the sense that if you want to have\n> > pg_stat_statements, loading that module via shared_preload_libraries\n> > is just natural.\n\nNot sure about natural but it's certainly what folks have at least\nbecome used to. We should be working to eliminate it though.\n\n> > Having to set compute_query_id isn't natural. It's a setting with a\n> > completely different name, and the connection of pg_stat_statements to\n> > compute_query_id isn't obvious.\n> > \n> > The reasoning with \"we have warnings and stuff\" might be ok if\n> > pg_stat_statements were a new thing, but it has worked via\n> > shared_preload_libraries only for the last decade, and requiring\n> > something extra will confuse probably every single user of\n> > pg_stat_statements out there.\n\nAs we keep seeing, over and over. The ongoing comments claiming that\nit's \"just\" a minor additional configuration tweak fall pretty flat when\nyou consider the number of times it's already been brought up, and who\nit has been brought up by.\n\n> > Perhaps worse, note that these warnings will likely first be seen by\n> > the end users of databases, not by the admin performing the initial\n> > setup or upgrade, who will not be able to fix the problem themselves.\n\nI don't think this is appreciated anywhere near well enough. This takes\nexisting perfectly working configurations and actively breaks them in a\nmanner that isn't obvious and isn't something that an admin would have\nany idea about until after they've upgraded and then started trying to\nquery the view. That's pretty terrible.\n\n> Well, but doing this extra configuration, the query id shows up in a lot\n> more places. 
I can't imagine how this could be done cleanly without\n> requiring extra configuration, unless the query_id computation was\n> cheaper to compute and we could enable it by default.\n\nThere's a ridiculously simple option here which is: drop the idea that\nwe support an extension redefining the query id and then just make it\non/off with the default to be 'on'. If people actually have a problem\nwith it being on and they don't want to use pg_stat_statements then they\ncan turn it off. This won't break any existing configs that are out\nthere in the field and avoids the complexity of having some kind of\n'auto' config. I do agree with the general idea of wanting to be\nextensible but I'm not convinced that, in this particular case, it's\nworth all of this. I'm somewhat convinced that having a way to disable\nthe query id is useful in limited cases and if people want a way to do\nthat, then we can give that to them in a straightforward way that doesn't\nbreak things.\n\nThanks,\n\nStephen", "msg_date": "Thu, 13 May 2021 14:07:06 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> There's a ridiculously simple option here which is: drop the idea that\n> we support an extension redefining the query id and then just make it\n> on/off with the default to be 'on'.\n\nI do not think that defaulting it to 'on' is acceptable unless you can\nshow that the added overhead is negligible. IIUC the measurements that\nhave been done show the opposite.\n\nMaybe we should revert this thing pending somebody doing the work to\nmake a version of queryid labeling that actually is negligibly cheap.\nIt certainly seems like that could be done; one more traversal of the\nparse tree can't be that expensive in itself. 
I suspect that the\nperformance problem is with the particular hashing mechanism that\nwas used, which looks mighty ad-hoc anyway.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 13 May 2021 14:29:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > There's a ridiculously simple option here which is: drop the idea that\n> > we support an extension redefining the query id and then just make it\n> > on/off with the default to be 'on'.\n> \n> I do not think that defaulting it to 'on' is acceptable unless you can\n> show that the added overhead is negligible. IIUC the measurements that\n> have been done show the opposite.\n\nAh, right, it had only been done before when pg_stat_statements was\nloaded.. In which case, it seems like we should:\n\na) go back to that\n\nb) if someone wants an alternative query ID, tell them to add it to\n pg_stat_statements and make it configurable *there*\n\nc) Have pg_stat_statements provide another function/view/etc that folks\n can use to get a queryid for an ongoing query ..?\n\nd) Maybe come up with a way for extensions, generically, to make a value\n available to log_line_prefix? That could be pretty interesting.\n\nOr just accept that this is a bit hokey with the 'auto' approach. I get\nBruce has concerns about it but I'm not convinced that it's actually all\nthat bad to go with that.\n\nThanks,\n\nStephen", "msg_date": "Thu, 13 May 2021 14:47:23 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On 2021-May-13, Stephen Frost wrote:\n\n> Or just accept that this is a bit hokey with the 'auto' approach. 
I get\n> Bruce has concerns about it but I'm not convinced that it's actually all\n> that bad to go with that.\n\nYeah, I think the alleged confusion there is overstated.\n\nI'm happy to act as committer for that if he wants to step away from it.\nI'm already used to being lapidated at every corner anyway.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"E pur si muove\" (Galileo Galilei)\n\n\n", "msg_date": "Thu, 13 May 2021 15:04:30 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Thu, May 13, 2021 at 02:29:09PM -0400, Tom Lane wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > There's a ridiculously simple option here which is: drop the idea that\n> > we support an extension redefining the query id and then just make it\n> > on/off with the default to be 'on'.\n> \n> I do not think that defaulting it to 'on' is acceptable unless you can\n> show that the added overhead is negligible. IIUC the measurements that\n> have been done show the opposite.\n\nAgreed.\n\n> Maybe we should revert this thing pending somebody doing the work to\n> make a version of queryid labeling that actually is negligibly cheap.\n> It certainly seems like that could be done; one more traversal of the\n> parse tree can't be that expensive in itself. 
I suspect that the\n> performance problem is with the particular hashing mechanism that\n> was used, which looks mighty ad-hoc anyway.\n\nI was surprised it was ~2%.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Thu, 13 May 2021 15:11:59 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Thu, May 13, 2021 at 03:04:30PM -0400, Álvaro Herrera wrote:\n> On 2021-May-13, Stephen Frost wrote:\n> \n> > Or just accept that this is a bit hokey with the 'auto' approach. I get\n> > Bruce has concerns about it but I'm not convinced that it's actually all\n> > that bad to go with that.\n> \n> Yeah, I think the alleged confusion there is overstated.\n> \n> I'm happy to act as committer for that if he wants to step away from it.\n> I'm already used to being lapidated at every corner anyway.\n\nOK, feel free to take ownership of it, thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Thu, 13 May 2021 15:13:23 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "\nOn 5/13/21 3:04 PM, Alvaro Herrera wrote:\n> On 2021-May-13, Stephen Frost wrote:\n>\n>> Or just accept that this is a bit hokey with the 'auto' approach. 
I get\n>> Bruce has concerns about it but I'm not convinced that it's actually all\n>> that bad to go with that.\n> Yeah, I think the alleged confusion there is overstated.\n>\n> I'm happy to act as committer for that if he wants to step away from it.\n> I'm already used to being lapidated at every corner anyway.\n>\n\n\nMany thanks Alvaro, among other things for teaching me a new word.\n\n\ncheers\n\n\n(delapidated) andrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 13 May 2021 19:19:11 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Thu, May 13, 2021 at 07:19:11PM -0400, Andrew Dunstan wrote:\n> \n> On 5/13/21 3:04 PM, Alvaro Herrera wrote:\n> > On 2021-May-13, Stephen Frost wrote:\n> >\n> >> Or just accept that this is a bit hokey with the 'auto' approach. I get\n> >> Bruce has concerns about it but I'm not convinced that it's actually all\n> >> that bad to go with that.\n> > Yeah, I think the alleged confusion there is overstated.\n> >\n> > I'm happy to act as committer for that if he wants to step away from it.\n> > I'm already used to being lapidated at every corner anyway.\n> >\n> \n> \n> Many thanks Alvaro, among other things for teaching me a new word.\n> \n> (delapidated) andrew\n\nYes, I had to look it up too. 
:-)\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Thu, 13 May 2021 19:22:58 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Thu, May 13, 2021 at 07:19:11PM -0400, Andrew Dunstan wrote:\n> On 5/13/21 3:04 PM, Alvaro Herrera wrote:\n>> I'm happy to act as committer for that if he wants to step away from it.\n>> I'm already used to being lapidated at every corner anyway.\n> \n> Many thanks Alvaro, among other things for teaching me a new word.\n\n+1. Thanks, Alvaro.\n--\nMichael", "msg_date": "Fri, 14 May 2021 08:56:36 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "Here's a first attempt at what was suggested. If you say \"auto\" it\nremains auto in SHOW, but it gets enabled if a module asks for it.\n\nNot final yet, but I thought I'd throw it out for early commentary ...\n\n-- \n�lvaro Herrera Valdivia, Chile", "msg_date": "Thu, 13 May 2021 20:04:37 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Thu, May 13, 2021 at 08:04:37PM -0400, �lvaro Herrera wrote:\n> Here's a first attempt at what was suggested. 
If you say \"auto\" it\nremains auto in SHOW, but it gets enabled if a module asks for it.\n\nNot final yet, but I thought I'd throw it out for early commentary ...\n\n-- \nÁlvaro Herrera Valdivia, Chile", "msg_date": "Thu, 13 May 2021 20:04:37 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Thu, May 13, 2021 at 08:04:37PM -0400, Álvaro Herrera wrote:\n> Here's a first attempt at what was suggested. 
If you say \"auto\" it\n> > remains auto in SHOW, but it gets enabled if a module asks for it.\n> >\n> > Not final yet, but I thought I'd throw it out for early commentary ...\n>\n> I certainly like this idea better than having the extension change the\n> output of the GUC.\n\nOh, I didn't understand that it was the major blocker. I'm fine with it too.\n\n\n", "msg_date": "Fri, 14 May 2021 09:40:15 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Fri, May 14, 2021 at 09:40:15AM +0800, Julien Rouhaud wrote:\n> On Fri, May 14, 2021 at 8:13 AM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > On Thu, May 13, 2021 at 08:04:37PM -0400, �lvaro Herrera wrote:\n> > > Here's a first attempt at what was suggested. If you say \"auto\" it\n> > > remains auto in SHOW, but it gets enabled if a module asks for it.\n> > >\n> > > Not final yet, but I thought I'd throw it out for early commentary ...\n> >\n> > I certainly like this idea better than having the extension change the\n> > output of the GUC.\n> \n> Oh, I didn't understand that it was the major blocker. I'm fine with it too.\n\nI think if we keep the output as 'auto', and document that you check\npg_stat_activity for a hash to see if it is enabled, that gets us pretty\nfar.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Thu, 13 May 2021 21:41:42 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Fri, May 14, 2021 at 3:12 AM Bruce Momjian <bruce@momjian.us> wrote:\n>> I was surprised it was ~2%.\n\n> Just to be clear, the 2% was a worst case scenario, ie. 
a very fast\n> read-only query on small data returning a single row. As soon as you\n> get something more realistic / expensive the overhead goes away.\n\nOf course, for plenty of people that IS the realistic scenario that\nthey care about max performance for.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 13 May 2021 21:47:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Thu, May 13, 2021 at 09:47:02PM -0400, Tom Lane wrote:\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > On Fri, May 14, 2021 at 3:12 AM Bruce Momjian <bruce@momjian.us> wrote:\n> >> I was surprised it was ~2%.\n> \n> > Just to be clear, the 2% was a worst case scenario, ie. a very fast\n> > read-only query on small data returning a single row. As soon as you\n> > get something more realistic / expensive the overhead goes away.\n> \n> Of course, for plenty of people that IS the realistic scenario that\n> they care about max performance for.\n\nI'm not arguing that the scenario is unrealistic. I'm arguing that retrieving\nthe first row of a join between pg_class and pg_attribute on an otherwise\nvanilla database may not be the most representative workload, especially when\nyou take into account that it was done on hardware that still took 3 ms to do\nthat.\n\nUnfortunately my laptop is pretty old and has already proven multiple time to\ngive unreliable benchmark results, so I'm not confident at all that those 2%\nare even real outside of my machine.\n\n\n", "msg_date": "Fri, 14 May 2021 10:21:59 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "\n\nOn 2021/05/14 9:04, Alvaro Herrera wrote:\n> Here's a first attempt at what was suggested. 
If you say \"auto\" it\n> remains auto in SHOW, but it gets enabled if a module asks for it.\n> \n> Not final yet, but I thought I'd throw it out for early commentary ...\n\nMany thanks! The patch basically looks good to me.\n\n+void\n+EnableQueryId(void)\n+{\n+\tif (compute_query_id == COMPUTE_QUERY_ID_AUTO)\n+\t\tauto_query_id_enabled = true;\n\nShouldn't EnableQueryId() enable auto_query_id_enabled whatever compute_query_id is?\nOtherwise, for example, the following scenario can happen and it's a bit strange.\n\n1. The server starts up with shared_preload_libraries=pg_stat_statements and compute_query_id=on\n2. compute_query_id is set to auto and the configuration file is reloaded\nThen, even though compute_query_id is auto and pg_stat_statements is loaded,\nquery ids are not computed and no queries are tracked by pg_stat_statements.\n\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 14 May 2021 12:20:00 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Fri, May 14, 2021 at 12:20:00PM +0900, Fujii Masao wrote:\n> \n> \n> On 2021/05/14 9:04, Alvaro Herrera wrote:\n> > Here's a first attempt at what was suggested. If you say \"auto\" it\n> > remains auto in SHOW, but it gets enabled if a module asks for it.\n> > \n> > Not final yet, but I thought I'd throw it out for early commentary ...\n> \n> Many thanks! The patch basically looks good to me.\n> \n> +void\n> +EnableQueryId(void)\n> +{\n> +\tif (compute_query_id == COMPUTE_QUERY_ID_AUTO)\n> +\t\tauto_query_id_enabled = true;\n> \n> Shouldn't EnableQueryId() enable auto_query_id_enabled whatever compute_query_id is?\n> Otherwise, for example, the following scenario can happen and it's a bit strange.\n> \n> 1. 
The server starts up with shared_preload_libraries=pg_stat_statements and compute_query_id=on\n> 2. compute_query_id is set to auto and the configuration file is reloaded\n> Then, even though compute_query_id is auto and pg_stat_statements is loaded,\n> query ids are not computed and no queries are tracked by pg_stat_statements.\n\n+1. Note that if you switch from compute_query_id = off + custom\nquery_id + pg_stat_statements to compute_query_id = auto then thing will\nimmediately break (as we instruct third-party plugins authors to error out in\nthat case), which is way better than breaking at the next random service\nrestart.\n\n\n", "msg_date": "Fri, 14 May 2021 11:57:33 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "I wrote:\n> Maybe we should revert this thing pending somebody doing the work to\n> make a version of queryid labeling that actually is negligibly cheap.\n> It certainly seems like that could be done; one more traversal of the\n> parse tree can't be that expensive in itself. I suspect that the\n> performance problem is with the particular hashing mechanism that\n> was used, which looks mighty ad-hoc anyway.\n\nTo put a little bit of meat on that idea, I experimented with jacking\nup the \"jumble\" calculation and driving some other implementations\nunderneath.\n\nI thought that Julien's \"worst case\" scenario was pretty far from\nworst case, since it involved a join which a lot of simple queries\ndon't. 
I tested this scenario instead:\n\n$ cat naive.sql\nSELECT * FROM pg_class c ORDER BY oid DESC LIMIT 1;\n$ pgbench -n -f naive.sql -T 60 postgres\n\nwhich is still complicated enough that there's work for the\nquery fingerprinter to do, but not so much for planning and\nexecution.\n\nI confirm that on HEAD, there's a noticeable TPS penalty from\nturning on compute_query_id: about 3.2% on my machine.\n\nThe first patch attached replaces the \"jumble\" calculation\nwith two CRC32s (two so that we still get 64 bits out at\nthe end). I see 2.7% penalty with this version. Now,\nI'm using an Intel machine with\n#define USE_SSE42_CRC32C_WITH_RUNTIME_CHECK 1\nso on machines without any hardware CRC support, this'd\nlikely be a loss. But it still proves the point that the\nexisting implementation is just not very speedy.\n\nI then tried a really dumb xor'ing implementation, and\nthat gives me a slowdown of 2.2%. This could undoubtedly\nbe improved on further, say by unrolling the loop or\nprocessing multiple bytes at once. One problem with it\nis that I suspect it will tend to concentrate the entropy\ninto the third/fourth and seventh/eighth bytes of the\naccumulator, since so many of the fields being jumbled\nare 4-byte or 8-byte fields with most of the entropy in\ntheir low-order bits. Probably that could be improved\nwith a bit more thought -- say, an extra bump of the\nnextbyte pointer after each field.\n\nAnyway, I think that what we have here is quite an inferior\nimplementation, and we can do better with some more thought.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 14 May 2021 00:26:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Fri, May 14, 2021 at 12:26:23AM -0400, Tom Lane wrote:\n> I then tried a really dumb xor'ing implementation, and\n> that gives me a slowdown of 2.2%. 
This could undoubtedly\n> be improved on further, say by unrolling the loop or\n> processing multiple bytes at once. One problem with it\n> is that I suspect it will tend to concentrate the entropy\n> into the third/fourth and seventh/eighth bytes of the\n> accumulator, since so many of the fields being jumbled\n> are 4-byte or 8-byte fields with most of the entropy in\n> their low-order bits. Probably that could be improved\n> with a bit more thought -- say, an extra bump of the\n> nextbyte pointer after each field.\n> \n> Anyway, I think that what we have here is quite an inferior\n> implementation, and we can do better with some more thought.\n\nConsidering what even a simple query has to do, I am still baffled why\nsuch a computation takes ~2%, though it obviously does since you have\nconfirmed that figure.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Fri, 14 May 2021 08:09:42 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Thu, May 13, 2021 at 09:41:42PM -0400, Bruce Momjian wrote:\n> On Fri, May 14, 2021 at 09:40:15AM +0800, Julien Rouhaud wrote:\n> > On Fri, May 14, 2021 at 8:13 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > >\n> > > On Thu, May 13, 2021 at 08:04:37PM -0400, Álvaro Herrera wrote:\n> > > > Here's a first attempt at what was suggested. If you say \"auto\" it\n> > > > remains auto in SHOW, but it gets enabled if a module asks for it.\n> > > >\n> > > > Not final yet, but I thought I'd throw it out for early commentary ...\n> > >\n> > > I certainly like this idea better than having the extension change the\n> > > output of the GUC.\n> > \n> > Oh, I didn't understand that it was the major blocker.
I'm fine with it too.\n> \n> I think if we keep the output as 'auto', and document that you check\n> pg_stat_activity for a hash to see if it is enabled, that gets us pretty\n> far.\n\nI think keeping the output as 'auto', and documenting that this query\nmust be run to determine if the query id is being computed:\n\n\tSELECT query_id\n\tFROM pg_stat_activity\n\tWHERE pid = pg_backend_pid();\n\nis the right approach.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Fri, 14 May 2021 08:35:14 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Fri, May 14, 2021 at 08:35:14AM -0400, Bruce Momjian wrote:\n> On Thu, May 13, 2021 at 09:41:42PM -0400, Bruce Momjian wrote:\n> > I think if we keep the output as 'auto', and document that you check\n> > pg_stat_activity for a hash to see if it is enabled, that gets us pretty\n> > far.\n> \n> I think keeping the output as 'auto', and documenting that this query\n> must be run to determine if the query id is being computed:\n> \n> \tSELECT query_id\n> \tFROM pg_stat_activity\n> \tWHERE pid = pg_backend_pid();\n> \n> is the right approach.\n\nActually, we talked about huge_pages = try needing to use OS commands to\nsee if huge pages are being used, so requiring an SQL query to see if\nquery id is being computed seems reasonable.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Fri, 14 May 2021 08:57:41 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Fri, May 14, 2021 at 08:57:41AM -0400, Bruce Momjian wrote:\n> On Fri, May 14, 2021 at 08:35:14AM 
-0400, Bruce Momjian wrote:\n> > On Thu, May 13, 2021 at 09:41:42PM -0400, Bruce Momjian wrote:\n> > > I think if we keep the output as 'auto', and document that you check\n> > > pg_stat_activity for a hash to see if it is enabled, that gets us pretty\n> > > far.\n> > \n> > I think keeping the output as 'auto', and documenting that this query\n> > must be run to determine if the query id is being computed:\n> > \n> > \tSELECT query_id\n> > \tFROM pg_stat_activity\n> > \tWHERE pid = pg_backend_pid();\n> > \n> > is the right approach.\n> \n> Actually, we talked about huge_pages = try needing to use OS commands to\n> see if huge pages are being used, so requiring an SQL query to see if\n> query id is being computed seems reasonable.\n\nI totally agree.\n\n\n", "msg_date": "Fri, 14 May 2021 22:27:49 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "\n\nSent from my iPhone\n\n> On May 14, 2021, at 8:35 AM, Bruce Momjian <bruce@momjian.us> wrote:\n> \n> On Thu, May 13, 2021 at 09:41:42PM -0400, Bruce Momjian wrote:\n>>> On Fri, May 14, 2021 at 09:40:15AM +0800, Julien Rouhaud wrote:\n>>> On Fri, May 14, 2021 at 8:13 AM Bruce Momjian <bruce@momjian.us> wrote:\n>>>> \n>>>> On Thu, May 13, 2021 at 08:04:37PM -0400, Álvaro Herrera wrote:\n>>>>> Here's a first attempt at what was suggested. If you say \"auto\" it\n>>>>> remains auto in SHOW, but it gets enabled if a module asks for it.\n>>>>> \n>>>>> Not final yet, but I thought I'd throw it out for early commentary ...\n>>>> \n>>>> I certainly like this idea better than having the extension change the\n>>>> output of the GUC.\n>>> \n>>> Oh, I didn't understand that it was the major blocker. 
I'm fine with it too.\n>> \n>> I think if we keep the output as 'auto', and document that you check\n>> pg_stat_activity for a hash to see if it is enabled, that gets us pretty\n>> far.\n> \n> I think keeping the output as 'auto', and documenting that this query\n> must be run to determine if the query id is being computed:\n> \n> SELECT query_id\n> FROM pg_stat_activity\n> WHERE pid = pg_backend_pid();\n> \n> is the right approach.\n\nI’d rather we added a specific function. This is not really obvious.\n\nCheers\n\nAndrew\n\n\n\n", "msg_date": "Fri, 14 May 2021 12:04:05 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Fri, May 14, 2021 at 12:04:05PM -0400, Andrew Dunstan wrote:\n> \n> > On May 14, 2021, at 8:35 AM, Bruce Momjian <bruce@momjian.us> wrote:\n> > \n> > On Thu, May 13, 2021 at 09:41:42PM -0400, Bruce Momjian wrote:\n> >>> On Fri, May 14, 2021 at 09:40:15AM +0800, Julien Rouhaud wrote:\n> >>> On Fri, May 14, 2021 at 8:13 AM Bruce Momjian <bruce@momjian.us> wrote:\n> >>>> \n> >>>> On Thu, May 13, 2021 at 08:04:37PM -0400, Álvaro Herrera wrote:\n> >>>>> Here's a first attempt at what was suggested. If you say \"auto\" it\n> >>>>> remains auto in SHOW, but it gets enabled if a module asks for it.\n> >>>>> \n> >>>>> Not final yet, but I thought I'd throw it out for early commentary ...\n> >>>> \n> >>>> I certainly like this idea better than having the extension change the\n> >>>> output of the GUC.\n> >>> \n> >>> Oh, I didn't understand that it was the major blocker. 
I'm fine with it too.\n> >> \n> >> I think if we keep the output as 'auto', and document that you check\n> >> pg_stat_activity for a hash to see if it is enabled, that gets us pretty\n> >> far.\n> > \n> > I think keeping the output as 'auto', and documenting that this query\n> > must be run to determine if the query id is being computed:\n> > \n> > SELECT query_id\n> > FROM pg_stat_activity\n> > WHERE pid = pg_backend_pid();\n> > \n> > is the right approach.\n> \n> I’d rather we added a specific function. This is not really obvious.\n\nWe could, but I don't know how much this will be used in practice. The only\nway someone would try to know if \"auto\" means that query_id are computed is if\nshe has an extension like pg_stat_statements, and she will probably just check\nthat anyway, and will get a warning if query_id are *not* computed.\n\nThat being said no objection to an SQL wrapper around a query like it.\n\n\n", "msg_date": "Sat, 15 May 2021 00:16:01 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Fri, May 14, 2021 at 12:04:05PM -0400, Andrew Dunstan wrote:\n> > On May 14, 2021, at 8:35 AM, Bruce Momjian <bruce@momjian.us> wrote:\n> > \n> > On Thu, May 13, 2021 at 09:41:42PM -0400, Bruce Momjian wrote:\n> > I think keeping the output as 'auto', and documenting that this query\n> > must be run to determine if the query id is being computed:\n> > \n> > SELECT query_id\n> > FROM pg_stat_activity\n> > WHERE pid = pg_backend_pid();\n> > \n> > is the right approach.\n> \n> I’d rather we added a specific function. This is not really obvious.\n\nWell, we can document this query, add a function, or add a read-only\nGUC. 
I am not sure how we decide which one to use.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Fri, 14 May 2021 12:21:23 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "pá 14. 5. 2021 v 18:21 odesílatel Bruce Momjian <bruce@momjian.us> napsal:\n\n> On Fri, May 14, 2021 at 12:04:05PM -0400, Andrew Dunstan wrote:\n> > > On May 14, 2021, at 8:35 AM, Bruce Momjian <bruce@momjian.us> wrote:\n> > > \n> > > On Thu, May 13, 2021 at 09:41:42PM -0400, Bruce Momjian wrote:\n> > > I think keeping the output as 'auto', and documenting that this query\n> > > must be run to determine if the query id is being computed:\n> > > \n> > >    SELECT query_id\n> > >    FROM pg_stat_activity\n> > >    WHERE pid = pg_backend_pid();\n> > > \n> > > is the right approach.\n> > \n> > I’d rather we added a specific function. This is not really obvious.\n>\n> Well, we can document this query, add a function, or add a read-only\n> GUC. I am not sure how we decide which one to use.\n>\n\nI though and I prefer read only GUC\n\nIt is easy to write \"show all\"\n\nPavel\n\n\n\n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> If only the physical world exists, free will is an illusion.\n>\n>\n", "msg_date": "Fri, 14 May 2021 18:23:15 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Fri, May 14, 2021 at 12:21:23PM -0400, Bruce Momjian wrote:\n> On Fri, May 14, 2021 at 12:04:05PM -0400, Andrew Dunstan wrote:\n> > > On May 14, 2021, at 8:35 AM, Bruce Momjian <bruce@momjian.us> wrote:\n> > > \n> > > On Thu, May 13, 2021 at 09:41:42PM -0400, Bruce Momjian wrote:\n> > > I think keeping the output as 'auto', and documenting that this query\n> > > must be run to determine if the query id is being computed:\n> > > \n> > > \tSELECT query_id\n> > > \tFROM pg_stat_activity\n> > > \tWHERE pid = pg_backend_pid();\n> > > \n> > > is the right approach.\n> > \n> > I’d rather we added a specific function. This is not really obvious.\n> \n> Well, we can document this query, add a function, or add a read-only\n> GUC.
I am not sure how we decide which one to use.\n\nI wonder if we should go with an SQL query now (no new API needed) and\nthen add a GUC once we decide on how extensions can register that they\nare generating the query id, so the GUC can report the generating\nsource, not just a boolean. The core server can also register as the\nsource.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Fri, 14 May 2021 13:28:38 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "pá 14. 5. 2021 v 19:28 odesílatel Bruce Momjian <bruce@momjian.us> napsal:\n\n> On Fri, May 14, 2021 at 12:21:23PM -0400, Bruce Momjian wrote:\n> > On Fri, May 14, 2021 at 12:04:05PM -0400, Andrew Dunstan wrote:\n> > > > On May 14, 2021, at 8:35 AM, Bruce Momjian <bruce@momjian.us> wrote:\n> > > >\n> > > > On Thu, May 13, 2021 at 09:41:42PM -0400, Bruce Momjian wrote:\n> > > > I think keeping the output as 'auto', and documenting that this query\n> > > > must be run to determine if the query id is being computed:\n> > > >\n> > > > SELECT query_id\n> > > > FROM pg_stat_activity\n> > > > WHERE pid = pg_backend_pid();\n> > > >\n> > > > is the right approach.\n> > >\n> > > I’d rather we added a specific function. This is not really obvious.\n> >\n> > Well, we can document this query, add a function, or add a read-only\n> > GUC. I am not sure how we decide which one to use.\n>\n> I wonder if we should go with an SQL query now (no new API needed) and\n> then add a GUC once we decide on how extensions can register that they\n> are generating the query id, so the GUC can report the generating\n> source, not just a boolean. The core server can also register as the\n> source.\n>\n\nI have no problem with it. 
This is an internal feature and can be enhanced\n(fixed) in time without problems.\n\nPavel\n\n\n\n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> If only the physical world exists, free will is an illusion.\n>\n>\n", "msg_date": "Fri, 14 May 2021 19:39:46 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On 2021-May-14, Julien Rouhaud wrote:\n\n> On Fri, May 14, 2021 at 12:20:00PM +0900, Fujii Masao wrote:\n\n> > +void\n> > +EnableQueryId(void)\n> > +{\n> > +\tif (compute_query_id == COMPUTE_QUERY_ID_AUTO)\n> > +\t\tauto_query_id_enabled = true;\n> > \n> > Shouldn't EnableQueryId() enable auto_query_id_enabled whatever compute_query_id is?\n> > Otherwise, for example, the following scenario can happen and it's a bit strange.\n> > \n> > 1. The server starts up with shared_preload_libraries=pg_stat_statements and compute_query_id=on\n> > 2. compute_query_id is set to auto and the configuration file is reloaded\n> > Then, even though compute_query_id is auto and pg_stat_statements is loaded,\n> > query ids are not computed and no queries are tracked by pg_stat_statements.\n> \n> +1.\n\nThat makes sense. Done in this version.\n\nI took out the new WARNING in pg_stat_statements. It's not clear to me\nthat that's terribly useful (it stops working as soon as you have one\nquery in the pg_stat_statements stash and later disable everything).\nMaybe there is an useful warning to add, but I think it's independent of\nchanging the GUC behabior.\n\nI also made IsQueryIdEnabled() a static inline in queryjumble.h, to\navoid a function call at every site where we need that.
Also did some\nlight doc editing.\n\nI think I should aim at pushing this tomorrow morning.\n\n> Note that if you switch from compute_query_id = off + custom\n> query_id + pg_stat_statements to compute_query_id = auto then thing will\n> immediately break (as we instruct third-party plugins authors to error out in\n> that case), which is way better than breaking at the next random service\n> restart.\n\nHmm, ok. I tested pg_queryid and that behavior of throwing an error\nseems quite unhelpful -- it basically makes every single query fail if\nyou set things wrong. I think that instruction is bogus and should be\nreconsidered. Maybe pg_queryid could use a custom Boolean GUC that\ntells it to overwrite the core query_id if that is enabled, or to just\nsilently do nothing in that case.\n\n\n\nWhile thinking about this patch it occurred to that an useful gadget\nmight be to let pg_stat_statements be loaded always, but with\ncompute_query_id=false; so it's never active; except if you\n ALTER {DATABASE, USER} foo SET (compute_query_id=on);\nso that it's possible to enable it selectively. I think this doesn't\ncurrently achieve anything because pgss_store is always called\nregardless of query ID being active (so you'd always have at least one\nfunction call as performance penalty, only that the work would be for\nnaught), but that seems a simple change to make. 
I didn't look closely\nto see what other things would need patched.\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\nMaybe there's lots of data loss but the records of data loss are also lost.\n(Lincoln Yeoh)", "msg_date": "Fri, 14 May 2021 19:50:13 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Fri, May 14, 2021 at 07:50:13PM -0400, Alvaro Herrera wrote:\n> +++ b/doc/src/sgml/config.sgml\n> @@ -7643,7 +7643,12 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;\n> identifier to be computed. Note that an external module can\n> alternatively be used if the in-core query identifier computation\n> method is not acceptable. In this case, in-core computation\n> - must be disabled. The default is <literal>off</literal>.\n> + must be always disabled.\n> + Valid values are <literal>off</literal> (always disabled),\n> + <literal>on</literal> (always enabled) and <literal>auto</literal>,\n> + which let modules such as <xref linkend=\"pgstatstatements\"/>\n> + automatically enable it.\n> + The default is <literal>auto</literal>.\n\nwhich lets\n\n> +/* True when a module requests query IDs and they're set auto */\n> +bool\t\tquery_id_enabled = false;\n\nDoes \"they're\" mean the GUC compute_query_id ?\n\n> +/*\n> + * This should only be called if IsQueryIdEnabled()\n> + * return true.\n> + */\n> JumbleState *\n> JumbleQuery(Query *query, const char *querytext)\n\nShould it Assert() that ?\n\nMaybe you should update this too ?\ndoc/src/sgml/release-14.sgml\n\n\n", "msg_date": "Fri, 14 May 2021 19:10:17 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Sat, May 15, 2021 at 7:50 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> I took out the new WARNING in pg_stat_statements. 
It's not clear to me\n> that that's terribly useful (it stops working as soon as you have one\n> query in the pg_stat_statements stash and later disable everything).\n\nIf no query_id is calculated and you have entries in\npg_stat_statements, it means someone deliberately deactivated\ncompute_query_id. In that case it's clear that they know the GUC\nexists, so there's no much point in warning them that they deactivated\nit I think.\n\n> Maybe there is an useful warning to add, but I think it's independent of\n> changing the GUC behabior.\n\nI'm fine with it.\n\n> > Note that if you switch from compute_query_id = off + custom\n> > query_id + pg_stat_statements to compute_query_id = auto then thing will\n> > immediately break (as we instruct third-party plugins authors to error out in\n> > that case), which is way better than breaking at the next random service\n> > restart.\n>\n> Hmm, ok. I tested pg_queryid and that behavior of throwing an error\n> seems quite unhelpful -- it basically makes every single query fail if\n> you set things wrong. I think that instruction is bogus and should be\n> reconsidered. Maybe pg_queryid could use a custom Boolean GUC that\n> tells it to overwrite the core query_id if that is enabled, or to just\n> silently do nothing in that case.\n\nUnless I'm missing something, if we remove that instruction it means\nthat we encourage users to dynamically change the query_id source\nwithout any safeguard, which will in the majority of case result in\nunwanted behavior, going from duplicated entries, poor performance in\npg_stat_statements if that leads to more evictions, or even totally\nbogus metrics if that leads to hash collision.\n\n> While thinking about this patch it occurred to that an useful gadget\n> might be to let pg_stat_statements be loaded always, but with\n> compute_query_id=false; so it's never active; except if you\n> ALTER {DATABASE, USER} foo SET (compute_query_id=on);\n> so that it's possible to enable it selectively. 
I think this doesn't\n> currently achieve anything because pgss_store is always called\n> regardless of query ID being active (so you'd always have at least one\n> function call as performance penalty, only that the work would be for\n> naught), but that seems a simple change to make. I didn't look closely\n> to see what other things would need patched.\n\nCouldn't it already be achieved with ALTER [ DATABASE | USER ] foo SET\npg_stat_statements.track = [ none | top | all ] ?\n\n\n", "msg_date": "Sat, 15 May 2021 16:09:32 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Sat, May 15, 2021 at 04:09:32PM +0800, Julien Rouhaud wrote:\n> > While thinking about this patch it occurred to that an useful gadget\n> > might be to let pg_stat_statements be loaded always, but with\n> > compute_query_id=false; so it's never active; except if you\n> > ALTER {DATABASE, USER} foo SET (compute_query_id=on);\n> > so that it's possible to enable it selectively. I think this doesn't\n> > currently achieve anything because pgss_store is always called\n> > regardless of query ID being active (so you'd always have at least one\n> > function call as performance penalty, only that the work would be for\n> > naught), but that seems a simple change to make. I didn't look closely\n> > to see what other things would need patched.\n> \n> Couldn't it already be achieved with ALTER [ DATABASE | USER ] foo SET\n> pg_stat_statements.track = [ none | top | all ] ?\n\nI am no longer the committer in charge of this feature, but I would like\nto remind the group that beta1 will be wrapped on Monday, and it is hard\nto change non-read-only GUCs after beta since the settings are embedded\nin postgresql.conf. 
There is also a release notes item that probably\nwill need to be adjusted.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Sat, 15 May 2021 10:00:25 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Sat, May 15, 2021 at 10:00 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> I am no longer the committer in charge of this feature, but I would like\n> to remind the group that beta1 will be wrapped on Monday, and it is hard\n> to change non-read-only GUCs after beta since the settings are embedded\n> in postgresql.conf. There is also a release notes item that probably\n> will need to be adjusted.\n\nIt seems that everyone agrees on the definition of compute_query_id in\nÁlvaro's v4 patch (module Justin's comments) so this could be\ncommitted before the beta1. If the safeguards for custom query_id or\nGUC misconfiguration have to be tweaked it shouldn't impact the GUC in\nany way.\n\n\n", "msg_date": "Sun, 16 May 2021 01:30:06 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On 2021-May-16, Julien Rouhaud wrote:\n\n> On Sat, May 15, 2021 at 10:00 PM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > I am no longer the committer in charge of this feature, but I would like\n> > to remind the group that beta1 will be wrapped on Monday, and it is hard\n> > to change non-read-only GUCs after beta since the settings are embedded\n> > in postgresql.conf. There is also a release notes item that probably\n> > will need to be adjusted.\n> \n> It seems that everyone agrees on the definition of compute_query_id in\n> Álvaro's v4 patch (module Justin's comments) so this could be\n> committed before the beta1.
If the safeguards for custom query_id or\n> GUC misconfiguration have to be tweaked it shouldn't impact the GUC in\n> any way.\n\nPushed after adding the fixes from Justin. Note I didn't include the\nWARNING in pg_stat_statements when this is disabled; if anybody wants to\nargue for that, let's add it separately.\n\nI commented out the release notes para that is now wrong. What remains\nis this:\n\n Move query hash computation from pg_stat_statements to the core server (Julien Rouhaud)\n\nWe could perhaps add something like\n\n Extension pg_stat_statements continues to work without requiring any\n configuration changes.\n\nbut that seems a bit pointless. Or maybe\n\n Extension pg_stat_statements automatically enables query identifier\n computation if compute_query_id is set to auto. Third-party modules\n to compute query identifiers can be installed and used if this is set\n to off.\n\n\nI wonder why the initial line says \"query hash\" instead of \"query\nidentifier\". Do we want to say \"hash\" everywhere? Why didn't we name\nthe GUC \"compute_query_hash\" in that case?\n\n\nAnyway, let me remind you that it is pretty common to require initdb\nduring the beta period.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\n\n", "msg_date": "Sat, 15 May 2021 14:21:59 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Sat, May 15, 2021 at 02:21:59PM -0400, Álvaro Herrera wrote:\n> I commented out the release notes para that is now wrong. What remains\n> is this:\n> \n> Move query hash computation from pg_stat_statements to the core server (Julien Rouhaud)\n> \n> We could perhaps add something like\n> \n> Extension pg_stat_statements continues to work without requiring any\n> configuration changes.\n> \n> but that seems a bit pointless.
Or maybe\n> \n> Extension pg_stat_statements automatically enables query identifier\n> computation if compute_query_id is set to auto. Third-party modules\n> to compute query identifiers can be installed and used if this is set\n> to off.\n> \n\nOK, new text is:\n\n\t<listitem>\n\t<!--\n\tAuthor: Bruce Momjian <bruce@momjian.us>\n\t2021-04-07 [5fd9dfa5f] Move pg_stat_statements query jumbling to core.\n\t-->\n\t\n\t<para>\n\tMove query hash computation from pg_stat_statements to the core\n\tserver (Julien Rouhaud)\n\t</para>\n\t\n\t<para>\n\tThe new server variable compute_query_id's default of 'auto' will\n\tautomatically enable query id computation when this extension\n\tis loaded.\n\t</para>\n\t</listitem>\n\nI also added Alvaro as an author of the compute_query_id item.\n\n> I wonder why the initial line says \"query hash\" instead of \"query\n> identifier\". Do we want to say \"hash\" everywhere? Why didn't we name\n> the GUC \"compute_query_hash\" in that case?\n\nIt is queryid (no underscore) in pg_stat_statements, which was a whole\ndifferent discussion. ;-)\n\n> Anyway, let me remind you that it is pretty common to require initdb\n> during the beta period.\n\nTrue.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Sat, 15 May 2021 17:32:58 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On 2021-May-15, Bruce Momjian wrote:\n\n> On Sat, May 15, 2021 at 02:21:59PM -0400, �lvaro Herrera wrote:\n\n> > I wonder why the initial line says \"query hash\" instead of \"query\n> > identifier\". Do we want to say \"hash\" everywhere? Why didn't we name\n> > the GUC \"compute_query_hash\" in that case?\n> \n> It is queryid (no underscore) in pg_stat_statements, which was a whole\n> different discussion. 
;-)\n\nYeah, I realize that, but I wonder if we shouldn't use the term \"query\nidentifier\" instead of \"query hash\" in that paragraph.\n\n> I also added Alvaro as an author of the compute_query_id item.\n\nI've been wondering if I should ask to stick my name in other features I\nhelped get committed -- specifically the PQtrace() item and autovacuum\nfor partitioned tables. I'll go comment in the release notes thread.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"Digital and video cameras have this adjustment and film cameras don't for the\nsame reason dogs and cats lick themselves: because they can.\" (Ken Rockwell)\n\n\n", "msg_date": "Sat, 15 May 2021 19:01:25 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Sat, May 15, 2021 at 05:32:58PM -0400, Bruce Momjian wrote:\n> OK, new text is:\n> \n> \t<listitem>\n> \t<!--\n> \tAuthor: Bruce Momjian <bruce@momjian.us>\n> \t2021-04-07 [5fd9dfa5f] Move pg_stat_statements query jumbling to core.\n> \t-->\n> \t\n> \t<para>\n> \tMove query hash computation from pg_stat_statements to the core\n> \tserver (Julien Rouhaud)\n> \t</para>\n> \t\n> \t<para>\n> \tThe new server variable compute_query_id's default of 'auto' will\n> \tautomatically enable query id computation when this extension\n> \tis loaded.\n> \t</para>\n> \t</listitem>\n> \n> I also added Alvaro as an author of the compute_query_id item.\n --------------------------------------------------------------\n\nBased on the commit message, adding Alvaro was incorrect, so I will\nrevert this change.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Sat, 15 May 2021 22:29:52 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, {
"msg_contents": "On 2021-May-15, Bruce Momjian wrote:\n\n> On Sat, May 15, 2021 at 05:32:58PM -0400, Bruce Momjian wrote:\n\n> > I also added Alvaro as an author of the compute_query_id item.\n> --------------------------------------------------------------\n> \n> Based on the commit message, adding Alvaro was incorrect, so I will\n> revert this change.\n\nAgreed. My work on this one was janitorial.\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\"Hay que recordar que la existencia en el cosmos, y particularmente la\nelaboraci�n de civilizaciones dentro de �l no son, por desgracia,\nnada id�licas\" (Ijon Tichy)\n\n\n", "msg_date": "Sat, 15 May 2021 23:23:25 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "Le dim. 16 mai 2021 à 11:23, Alvaro Herrera <alvherre@alvh.no-ip.org> a\nécrit :\n\n> On 2021-May-15, Bruce Momjian wrote:\n>\n> > On Sat, May 15, 2021 at 05:32:58PM -0400, Bruce Momjian wrote:\n>\n> > > I also added Alvaro as an author of the compute_query_id item.\n> > --------------------------------------------------------------\n> >\n> > Based on the commit message, adding Alvaro was incorrect, so I will\n> > revert this change.\n>\n> Agreed. My work on this one was janitorial.\n>\n\nThanks a lot Alvaro and Bruce!\n\n>\n\nLe dim. 16 mai 2021 à 11:23, Alvaro Herrera <alvherre@alvh.no-ip.org> a écrit :On 2021-May-15, Bruce Momjian wrote:\n\n> On Sat, May 15, 2021 at 05:32:58PM -0400, Bruce Momjian wrote:\n\n> > I also added Alvaro as an author of the compute_query_id item.\n>   --------------------------------------------------------------\n> \n> Based on the commit message, adding Alvaro was incorrect, so I will\n> revert this change.\n\nAgreed.  
My work on this one was janitorial. Thanks a lot Alvaro and Bruce!", "msg_date": "Sun, 16 May 2021 20:39:33 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Sat, May 15, 2021 at 07:01:25PM -0400, Álvaro Herrera wrote:\n> On 2021-May-15, Bruce Momjian wrote:\n> \n> > On Sat, May 15, 2021 at 02:21:59PM -0400, Álvaro Herrera wrote:\n> \n> > > I wonder why the initial line says \"query hash\" instead of \"query\n> > > identifier\". Do we want to say \"hash\" everywhere? Why didn't we name\n> > > the GUC \"compute_query_hash\" in that case?\n> > \n> > It is queryid (no underscore) in pg_stat_statements, which was a whole\n> > different discussion. ;-)\n> \n> Yeah, I realize that, but I wonder if we shouldn't use the term \"query\n> identifier\" instead of \"query hash\" in that paragraph.\n\nYes, of course, you are right --- updated text:\n\n\t<listitem>\n\t<!--\n\tAuthor: Bruce Momjian <bruce@momjian.us>\n\t2021-04-07 [4f0b0966c] Make use of in-core query id added by commit 5fd9dfa5f5\n\tAuthor: Bruce Momjian <bruce@momjian.us>\n\t2021-04-07 [f57a2f5e0] Add csvlog output for the new query_id value\n\tAuthor: Bruce Momjian <bruce@momjian.us>\n\t2021-04-20 [9660834dd] adjust query id feature to use pg_stat_activity.query_id\n\tAuthor: Bruce Momjian <bruce@momjian.us>\n\t2021-05-03 [f7a97b6ec] Update query_id computation\n\tAuthor: Alvaro Herrera <alvherre@alvh.no-ip.org>\n\t2021-05-15 [cafde58b3] Allow compute_query_id to be set to 'auto' and make it d\n\t-->\n\t\n\t<para>\n\tIf server variable compute_query_id is enabled, display the query\n\tid in pg_stat_activity, EXPLAIN VERBOSE, csvlog, and optionally\n\tin log_line_prefix (Julien Rouhaud)\n\t</para>\n\t\n\t<para>\n\tA query id computed by an extension will also be displayed.\n\t</para>\n\t</listitem>\n\n> \n> > I also added Alvaro as an author of the compute_query_id item.\n> \n> I've been wondering 
if I should ask to stick my name in other features I\n> helped get committed -- specifically the PQtrace() item and autovacuum\n> for partitioned tables. I'll go comment in the release notes thread.\n\nYes, done.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Sun, 16 May 2021 23:12:34 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Sat, May 15, 2021 at 11:23:25PM -0400, Álvaro Herrera wrote:\n> On 2021-May-15, Bruce Momjian wrote:\n> \n> > On Sat, May 15, 2021 at 05:32:58PM -0400, Bruce Momjian wrote:\n> \n> > > I also added Alvaro as an author of the compute_query_id item.\n> > --------------------------------------------------------------\n> > \n> > Based on the commit message, adding Alvaro was incorrect, so I will\n> > revert this change.\n> \n> Agreed. My work on this one was janitorial.\n\nOK, removed, thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Sun, 16 May 2021 23:12:51 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Sun, May 16, 2021 at 08:39:33PM +0800, Julien Rouhaud wrote:\n> Le dim. 
16 mai 2021 à 11:23, Alvaro Herrera <alvherre@alvh.no-ip.org> a écrit :\n> \n> On 2021-May-15, Bruce Momjian wrote:\n> \n> > On Sat, May 15, 2021 at 05:32:58PM -0400, Bruce Momjian wrote:\n> \n> > > I also added Alvaro as an author of the compute_query_id item.\n> >   --------------------------------------------------------------\n> >\n> > Based on the commit message, adding Alvaro was incorrect, so I will\n> > revert this change.\n> \n> Agreed.  My work on this one was janitorial.\n> \n> \n> Thanks a lot Alvaro and Bruce! \n\nWe are going to get to the goal line, one way or the other!  ;-)\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Sun, 16 May 2021 23:13:24 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "\nOn 5/16/21 11:13 PM, Bruce Momjian wrote:\n> On Sun, May 16, 2021 at 08:39:33PM +0800, Julien Rouhaud wrote:\n>> Le dim. 16 mai 2021 à 11:23, Alvaro Herrera <alvherre@alvh.no-ip.org> a écrit :\n>>\n>> On 2021-May-15, Bruce Momjian wrote:\n>>\n>> > On Sat, May 15, 2021 at 05:32:58PM -0400, Bruce Momjian wrote:\n>>\n>> > > I also added Alvaro as an author of the compute_query_id item.\n>> >   --------------------------------------------------------------\n>> >\n>> > Based on the commit message, adding Alvaro was incorrect, so I will\n>> > revert this change.\n>>\n>> Agreed.  My work on this one was janitorial.\n>>\n>>\n>> Thanks a lot Alvaro and Bruce! \n> We are going to get to the goal line, one way or the other!  ;-)\n\n\n\nI've discussed this with Alvaro. 
He's not planning to do anything more\nregarding this and I think we can close the open item.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 21 May 2021 14:19:13 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "On Fri, May 21, 2021 at 02:19:13PM -0400, Andrew Dunstan wrote:\n> \n> On 5/16/21 11:13 PM, Bruce Momjian wrote:\n> > On Sun, May 16, 2021 at 08:39:33PM +0800, Julien Rouhaud wrote:\n> >> Le dim. 16 mai 2021 à 11:23, Alvaro Herrera <alvherre@alvh.no-ip.org> a écrit :\n> >>\n> >> On 2021-May-15, Bruce Momjian wrote:\n> >>\n> >> > On Sat, May 15, 2021 at 05:32:58PM -0400, Bruce Momjian wrote:\n> >>\n> >> > > I also added Alvaro as an author of the compute_query_id item.\n> >> >   --------------------------------------------------------------\n> >> >\n> >> > Based on the commit message, adding Alvaro was incorrect, so I will\n> >> > revert this change.\n> >>\n> >> Agreed.  My work on this one was janitorial.\n> >>\n> >>\n> >> Thanks a lot Alvaro and Bruce! \n> > We are going to get to the goal line, one way or the other! ;-)\n> \n> \n> \n> I've discussed this with Alvaro. He's not planning to do anything more\n> regarding this and I think we can close the open item.\n\nWorks for me.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Fri, 21 May 2021 14:27:29 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" }, { "msg_contents": "Le sam. 
22 mai 2021 à 02:27, Bruce Momjian <bruce@momjian.us> a écrit :\n\n> On Fri, May 21, 2021 at 02:19:13PM -0400, Andrew Dunstan wrote:\n> >\n> > On 5/16/21 11:13 PM, Bruce Momjian wrote:\n> > > On Sun, May 16, 2021 at 08:39:33PM +0800, Julien Rouhaud wrote:\n> > >> Le dim. 16 mai 2021 à 11:23, Alvaro Herrera <alvherre@alvh.no-ip.org>\n> a écrit :\n> > >>\n> > >> On 2021-May-15, Bruce Momjian wrote:\n> > >>\n> > >> > On Sat, May 15, 2021 at 05:32:58PM -0400, Bruce Momjian wrote:\n> > >>\n> > >> > > I also added Alvaro as an author of the compute_query_id item.\n> > >> > --------------------------------------------------------------\n> > >> >\n> > >> > Based on the commit message, adding Alvaro was incorrect, so I\n> will\n> > >> > revert this change.\n> > >>\n> > >> Agreed. My work on this one was janitorial.\n> > >>\n> > >>\n> > >> Thanks a lot Alvaro and Bruce!\n> > > We are going to get to the goal line, one way or the other! ;-)\n> >\n> >\n> >\n> > I've discussed this with Alvaro. He's not planning to do anything more\n> > regarding this and I think we can close the open item.\n>\n> Works for me.\n>\n\nworks for me too.\n\n>\n\nLe sam. 22 mai 2021 à 02:27, Bruce Momjian <bruce@momjian.us> a écrit :On Fri, May 21, 2021 at 02:19:13PM -0400, Andrew Dunstan wrote:\n> \n> On 5/16/21 11:13 PM, Bruce Momjian wrote:\n> > On Sun, May 16, 2021 at 08:39:33PM +0800, Julien Rouhaud wrote:\n> >> Le dim. 16 mai 2021 à 11:23, Alvaro Herrera <alvherre@alvh.no-ip.org> a écrit :\n> >>\n> >>     On 2021-May-15, Bruce Momjian wrote:\n> >>\n> >>     > On Sat, May 15, 2021 at 05:32:58PM -0400, Bruce Momjian wrote:\n> >>\n> >>     > > I also added Alvaro as an author of the compute_query_id item.\n> >>     >   --------------------------------------------------------------\n> >>     >\n> >>     > Based on the commit message, adding Alvaro was incorrect, so I will\n> >>     > revert this change.\n> >>\n> >>     Agreed.  
My work on this one was janitorial.\n> >>\n> >>\n> >> Thanks a lot Alvaro and Bruce! \n> > We are going to get to the goal line, one way or the other!  ;-)\n> \n> \n> \n> I've discussed this with Alvaro. He's not planning to do anything more\n> regarding this and I think we can close the open item.\n\nWorks for me.works for me too.", "msg_date": "Sat, 22 May 2021 12:35:05 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compute_query_id and pg_stat_statements" } ]
[ { "msg_contents": "Hackers,\n\nLast year, when working on making compactify_tuples() go faster for\n19c60ad69, I did quite a bit of benchmarking of the recovery process.\nThe next thing that was slow after compactify_tuples() was the hash\nlookups done in smgropen().\n\nCurrently, we use dynahash hash tables to store the SMgrRelation so we\ncan perform fast lookups by RelFileNodeBackend. However, I had in mind\nthat a simplehash table might perform better. So I tried it...\n\nThe attached converts the hash table lookups done in smgr.c to use\nsimplehash instead of dynahash.\n\nThis does require a few changes in simplehash.h to make it work. The\nreason being is that RelationData.rd_smgr points directly into the\nhash table entries. This works ok for dynahash as that hash table\nimplementation does not do any reallocations of existing items or move\nany items around in the table, however, simplehash moves entries\naround all the time, so we can't point any pointers directly at the\nhash entries and expect them to be valid after adding or removing\nanything else from the table.\n\nTo work around that, I've just made an additional type that serves as\nthe hash entry type that has a pointer to the SMgrRelationData along\nwith the hash status and hash value. It's just 16 bytes (or 12 on\n32-bit machines). I opted to keep the hash key in the\nSMgrRelationData rather than duplicating it as it keeps the SMgrEntry\nstruct nice and small. We only need to dereference the SMgrRelation\npointer when we find an entry with the same hash value. The chances\nare quite good that an entry with the same hash value is the one that\nwe want, so any additional dereferences to compare the key are not\ngoing to happen very often.\n\nI did experiment with putting the hash key in SMgrEntry and found it\nto be quite a bit slower. 
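To make the entry layout being described concrete, here is a rough, self-contained C sketch of a 16-byte entry and the hash-first, key-second match test (the type and field names here are illustrative stand-ins, not the actual definitions from the patch or from PostgreSQL):

```c
#include <stdint.h>
#include <assert.h>

/* Illustrative stand-in for SMgrRelationData; only the key matters here. */
typedef struct SMgrRelationData
{
	uint64_t	rnode_key;		/* stand-in for the RelFileNodeBackend key */
} SMgrRelationData;

/*
 * The hash table entry stays small: a status word, the cached 32-bit
 * hash value, and a pointer to the separately allocated
 * SMgrRelationData.  On a 64-bit machine that is 4 + 4 + 8 = 16 bytes,
 * so four entries fit on one 64-byte cache line.
 */
typedef struct SMgrEntry
{
	uint32_t	status;			/* empty / in-use */
	uint32_t	hash;			/* cached hash of data->rnode_key */
	SMgrRelationData *data;		/* entry points at the relation data */
} SMgrEntry;

/*
 * Compare the cheap cached hash first; only dereference 'data' to
 * compare the full key when the 32-bit hash values already match.
 */
static inline int
smgr_entry_matches(const SMgrEntry *entry, uint32_t hash, uint64_t key)
{
	return entry->hash == hash && entry->data->rnode_key == key;
}
```

Because the key itself stays in the relation data, the extra pointer dereference only happens on a 32-bit hash match, which is almost always the entry being looked for anyway.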
I also did try to use hash_bytes() but\nfound building a hash function that uses murmurhash32 to be quite a\nbit faster.\n\nBenchmarking\n===========\n\nI did some of that. It made my test case about 10% faster.\n\nThe test case was basically inserting 100 million rows one at a time\ninto a hash partitioned table with 1000 partitions and 2 int columns\nand a primary key on one of those columns. It was about 12GB of WAL. I\nused a hash partitioned table in the hope of creating a fairly\nrandom-looking SMgr hash table access pattern. Hopefully something\nsimilar to what might happen in the real world.\n\nOver 10 runs of recovery, master took an average of 124.89 seconds.\nThe patched version took 113.59 seconds. About 10% faster.\n\nI bumped shared_buffers up to 10GB, max_wal_size to 20GB and\ncheckpoint_timeout to 60 mins.\n\nTo make the benchmark easier to repeat I patched with the\nattached recovery_panic.patch.txt. This just PANICs at the end of\nrecovery so that the database shuts down before performing the end of\nrecovery checkpoint. Just start the database up again to do another\nrun.\n\nI did 10 runs. 
The end of recovery log message reported:\n\nmaster (aa271209f)\nCPU: user: 117.89 s, system: 5.70 s, elapsed: 123.65 s\nCPU: user: 117.81 s, system: 5.74 s, elapsed: 123.62 s\nCPU: user: 119.39 s, system: 5.75 s, elapsed: 125.20 s\nCPU: user: 117.98 s, system: 4.39 s, elapsed: 122.41 s\nCPU: user: 117.92 s, system: 4.79 s, elapsed: 122.76 s\nCPU: user: 119.84 s, system: 4.75 s, elapsed: 124.64 s\nCPU: user: 120.60 s, system: 5.82 s, elapsed: 126.49 s\nCPU: user: 118.74 s, system: 5.71 s, elapsed: 124.51 s\nCPU: user: 124.29 s, system: 6.79 s, elapsed: 131.14 s\nCPU: user: 118.73 s, system: 5.67 s, elapsed: 124.47 s\n\nmaster + v1 patch\nCPU: user: 106.90 s, system: 4.45 s, elapsed: 111.39 s\nCPU: user: 107.31 s, system: 5.98 s, elapsed: 113.35 s\nCPU: user: 107.14 s, system: 5.58 s, elapsed: 112.77 s\nCPU: user: 105.79 s, system: 5.64 s, elapsed: 111.48 s\nCPU: user: 105.78 s, system: 5.80 s, elapsed: 111.63 s\nCPU: user: 113.18 s, system: 6.21 s, elapsed: 119.45 s\nCPU: user: 107.74 s, system: 4.57 s, elapsed: 112.36 s\nCPU: user: 107.42 s, system: 4.62 s, elapsed: 112.09 s\nCPU: user: 106.54 s, system: 4.65 s, elapsed: 111.24 s\nCPU: user: 113.24 s, system: 6.86 s, elapsed: 120.16 s\n\nI wrote this patch a few days ago. 
I'm only posting it now as I know a\ncouple of other people have expressed an interest in working on this.\nI didn't really want any duplicate efforts, so thought I'd better post\nit now before someone else goes and writes a similar patch.\n\nI'll park this here and have another look at it when the PG15 branch opens.\n\nDavid", "msg_date": "Sun, 25 Apr 2021 03:58:38 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Use simplehash.h instead of dynahash in SMgr" }, { "msg_contents": "David Rowley писал 2021-04-24 18:58:\n> Hackers,\n> \n> Last year, when working on making compactify_tuples() go faster for\n> 19c60ad69, I did quite a bit of benchmarking of the recovery process.\n> The next thing that was slow after compactify_tuples() was the hash\n> lookups done in smgropen().\n> \n> Currently, we use dynahash hash tables to store the SMgrRelation so we\n> can perform fast lookups by RelFileNodeBackend. However, I had in mind\n> that a simplehash table might perform better. So I tried it...\n> \n> The attached converts the hash table lookups done in smgr.c to use\n> simplehash instead of dynahash.\n> \n> This does require a few changes in simplehash.h to make it work. The\n> reason being is that RelationData.rd_smgr points directly into the\n> hash table entries. This works ok for dynahash as that hash table\n> implementation does not do any reallocations of existing items or move\n> any items around in the table, however, simplehash moves entries\n> around all the time, so we can't point any pointers directly at the\n> hash entries and expect them to be valid after adding or removing\n> anything else from the table.\n> \n> To work around that, I've just made an additional type that serves as\n> the hash entry type that has a pointer to the SMgrRelationData along\n> with the hash status and hash value. It's just 16 bytes (or 12 on\n> 32-bit machines). 
I opted to keep the hash key in the\n> SMgrRelationData rather than duplicating it as it keeps the SMgrEntry\n> struct nice and small. We only need to dereference the SMgrRelation\n> pointer when we find an entry with the same hash value. The chances\n> are quite good that an entry with the same hash value is the one that\n> we want, so any additional dereferences to compare the key are not\n> going to happen very often.\n> \n> I did experiment with putting the hash key in SMgrEntry and found it\n> to be quite a bit slower. I also did try to use hash_bytes() but\n> found building a hash function that uses murmurhash32 to be quite a\n> bit faster.\n> \n> Benchmarking\n> ===========\n> \n> I did some of that. It made my test case about 10% faster.\n> \n> The test case was basically inserting 100 million rows one at a time\n> into a hash partitioned table with 1000 partitions and 2 int columns\n> and a primary key on one of those columns. It was about 12GB of WAL. I\n> used a hash partitioned table in the hope to create a fairly\n> random-looking SMgr hash table access pattern. Hopefully something\n> similar to what might happen in the real world.\n> \n> Over 10 runs of recovery, master took an average of 124.89 seconds.\n> The patched version took 113.59 seconds. About 10% faster.\n> \n> I bumped shared_buffers up to 10GB, max_wal_size to 20GB and\n> checkpoint_timeout to 60 mins.\n> \n> To make the benchmark more easily to repeat I patched with the\n> attached recovery_panic.patch.txt. This just PANICs at the end of\n> recovery so that the database shuts down before performing the end of\n> recovery checkpoint. Just start the database up again to do another\n> run.\n> \n> I did 10 runs. 
The end of recovery log message reported:\n> \n> master (aa271209f)\n> CPU: user: 117.89 s, system: 5.70 s, elapsed: 123.65 s\n> CPU: user: 117.81 s, system: 5.74 s, elapsed: 123.62 s\n> CPU: user: 119.39 s, system: 5.75 s, elapsed: 125.20 s\n> CPU: user: 117.98 s, system: 4.39 s, elapsed: 122.41 s\n> CPU: user: 117.92 s, system: 4.79 s, elapsed: 122.76 s\n> CPU: user: 119.84 s, system: 4.75 s, elapsed: 124.64 s\n> CPU: user: 120.60 s, system: 5.82 s, elapsed: 126.49 s\n> CPU: user: 118.74 s, system: 5.71 s, elapsed: 124.51 s\n> CPU: user: 124.29 s, system: 6.79 s, elapsed: 131.14 s\n> CPU: user: 118.73 s, system: 5.67 s, elapsed: 124.47 s\n> \n> master + v1 patch\n> CPU: user: 106.90 s, system: 4.45 s, elapsed: 111.39 s\n> CPU: user: 107.31 s, system: 5.98 s, elapsed: 113.35 s\n> CPU: user: 107.14 s, system: 5.58 s, elapsed: 112.77 s\n> CPU: user: 105.79 s, system: 5.64 s, elapsed: 111.48 s\n> CPU: user: 105.78 s, system: 5.80 s, elapsed: 111.63 s\n> CPU: user: 113.18 s, system: 6.21 s, elapsed: 119.45 s\n> CPU: user: 107.74 s, system: 4.57 s, elapsed: 112.36 s\n> CPU: user: 107.42 s, system: 4.62 s, elapsed: 112.09 s\n> CPU: user: 106.54 s, system: 4.65 s, elapsed: 111.24 s\n> CPU: user: 113.24 s, system: 6.86 s, elapsed: 120.16 s\n> \n> I wrote this patch a few days ago. I'm only posting it now as I know a\n> couple of other people have expressed an interest in working on this.\n> I didn't really want any duplicate efforts, so thought I'd better post\n> it now before someone else goes and writes a similar patch.\n> \n> I'll park this here and have another look at it when the PG15 branch \n> opens.\n> \n> David\n\nHi, David\n\nIt is quite interesting result. Simplehash being open-addressing with\nlinear probing is friendly for cpu cache. I'd recommend to define\nSH_FILLFACTOR with value lower than default (0.9). 
I believe 0.75 is\nsuitable most for such kind of hash table.\n\n> +\t/* rotate hashkey left 1 bit at each step */\n> +\thashkey = (hashkey << 1) | ((hashkey & 0x80000000) ? 1 : 0);\n> +\thashkey ^= murmurhash32((uint32) rnode->node.dbNode);\n\nWhy do you use so strange rotation expression? I know compillers are \nable\nto translage `h = (h << 1) | (h >> 31)` to single rotate instruction.\nDo they recognize construction in your code as well?\n\nYour construction looks more like \"multiplate-modulo\" operation in 32bit\nGalois field . It is widely used operation in cryptographic, but it is\nused modulo some primitive polynomial, and 0x100000001 is not such\npolynomial. 0x1000000c5 is, therefore it should be:\n\n hashkey = (hashkey << 1) | ((hashkey & 0x80000000) ? 0xc5 : 0);\nor\n hashkey = (hashkey << 1) | ((uint32)((int32)hashkey >> 31) & 0xc5);\n\nBut why don't just use hash_combine(uint32 a, uint32 b) instead (defined\nin hashfn.h)? Yep, it could be a bit slower, but is it critical?\n\n> - *\tsmgrclose() -- Close and delete an SMgrRelation object.\n> + *\tsmgrclose() -- Close and delete an SMgrRelation object but don't\n> + *\tremove from the SMgrRelationHash table.\n\nI believe `smgrclose_internal()` should be in this comment.\n\nStill I don't believe it worth to separate smgrclose_internal from\nsmgrclose. Is there measurable performance improvement from this\nchange? Even if there is, it will be lesser with SH_FILLFACTOR 0.75 .\n\nAs well I don't support modification simplehash.h for \nSH_ENTRY_INITIALIZER,\nSH_ENTRY_CLEANUP and SH_TRUNCATE. The initialization could comfortably\nlive in smgropen and the cleanup in smgrclose. 
And then SH_TRUNCATE\ndoesn't mean much.\n\nSummary:\n\nregards,\nYura Sokolov", "msg_date": "Sun, 25 Apr 2021 01:27:24 +0300", "msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Use simplehash.h instead of dynahash in SMgr" }, { "msg_contents": "Thanks for having a look at this.\n\n \"On Sun, 25 Apr 2021 at 10:27, Yura Sokolov <y.sokolov@postgrespro.ru> wrote:\n>\n> It is quite interesting result. Simplehash being open-addressing with\n> linear probing is friendly for cpu cache. I'd recommend to define\n> SH_FILLFACTOR with value lower than default (0.9). I believe 0.75 is\n> suitable most for such kind of hash table.\n\nYou might be right there, although, with the particular benchmark I'm\nusing the size of the table does not change as a result of that. I'd\nneed to experiment with varying numbers of relations to see if\ndropping the fillfactor helps or hinders performance.\n\nFWIW, the hash stats at the end of recovery are:\n\nLOG: redo done at 3/C6E34F0 system usage: CPU: user: 107.00 s,\nsystem: 5.61 s, elapsed: 112.67 s\nLOG: size: 4096, members: 2032, filled: 0.496094, total chain: 997,\nmax chain: 5, avg chain: 0.490650, total_collisions: 422,\nmax_collisions: 3, avg_collisions: 0.207677\n\nPerhaps if try using a number of relations somewhere between 2048 *\n0.75 and 2048 * 0.9 then I might see some gains. Because I have 2032,\nthe hash table grew up to 4096 buckets.\n\nI did a quick test dropping the fillfactor down to 0.4. The aim there\nwas just to see if having 8192 buckets in this test would make it\nfaster or slower\n\nLOG: redo done at 3/C6E34F0 system usage: CPU: user: 109.61 s,\nsystem: 4.28 s, elapsed: 113.93 s\nLOG: size: 8192, members: 2032, filled: 0.248047, total chain: 303,\nmax chain: 2, avg chain: 0.149114, total_collisions: 209,\nmax_collisions: 2, avg_collisions: 0.102854\n\nit was slightly slower. 
I guess since the SMgrEntry is just 16 bytes\nwide that 4 of these will sit on each cache line which means there is\na 75% chance that the next bucket over is on the same cache line.\nSince the average chain length is just 0.49 then we'll mostly just\nneed to look at a single cache line to find the entry with the correct\nhash key.\n\n> > + /* rotate hashkey left 1 bit at each step */\n> > + hashkey = (hashkey << 1) | ((hashkey & 0x80000000) ? 1 : 0);\n> > + hashkey ^= murmurhash32((uint32) rnode->node.dbNode);\n>\n> Why do you use so strange rotation expression? I know compillers are\n> able\n> to translage `h = (h << 1) | (h >> 31)` to single rotate instruction.\n> Do they recognize construction in your code as well?\n\nNot sure about all compilers, I only checked the earliest version of\nclang and gcc at godbolt.org and they both use a single \"rol\"\ninstruction. https://godbolt.org/z/1GqdE6T3q\n\n> Your construction looks more like \"multiplate-modulo\" operation in 32bit\n> Galois field . It is widely used operation in cryptographic, but it is\n> used modulo some primitive polynomial, and 0x100000001 is not such\n> polynomial. 0x1000000c5 is, therefore it should be:\n>\n> hashkey = (hashkey << 1) | ((hashkey & 0x80000000) ? 0xc5 : 0);\n> or\n> hashkey = (hashkey << 1) | ((uint32)((int32)hashkey >> 31) & 0xc5);\n\nThat does not really make sense to me. If you're shifting a 32-bit\nvariable left 31 places then why would you AND with 0xc5? The only\npossible result is 1 or 0 depending on if the most significant bit is\non or off. I see gcc and clang are unable to optimise that into an\n\"rol\" instruction. If I swap the \"0xc5\" for \"1\", then they're able to\noptimise the expression.\n\n> But why don't just use hash_combine(uint32 a, uint32 b) instead (defined\n> in hashfn.h)? 
Yep, it could be a bit slower, but is it critical?\n\nI had that function in the corner of my eye when writing this, but\nTBH, the hash function performance was just too big a factor to slow\nit down any further by using the more expensive hash_combine()\nfunction. I saw pretty good performance gains from writing my own hash\nfunction rather than using hash_bytes(). I didn't want to detract from\nthat by using hash_combine(). Rotating the bits left 1 slot seems\ngood enough for hash join and hash aggregate, so I don't have any\nreason to believe it's a bad way to combine the hash values. Do you?\n\nIf you grep the source for \"hashkey = (hashkey << 1) | ((hashkey &\n0x80000000) ? 1 : 0);\", then you'll see where else we do the same\nrotate left trick.\n\n> > - * smgrclose() -- Close and delete an SMgrRelation object.\n> > + * smgrclose() -- Close and delete an SMgrRelation object but don't\n> > + * remove from the SMgrRelationHash table.\n>\n> I believe `smgrclose_internal()` should be in this comment.\n\nOops. Yeah, that's a mistake.\n\n> Still I don't believe it worth to separate smgrclose_internal from\n> smgrclose. Is there measurable performance improvement from this\n> change? Even if there is, it will be lesser with SH_FILLFACTOR 0.75 .\n\nThe reason I did that is due to the fact that smgrcloseall() loops\nover the entire hash table and removes each entry one by one. The\nproblem is that if I do a smgrtable_delete or smgrtable_delete_item in\nthat loop then I'd need to restart the loop each time. Be aware that\na simplehash delete can move entries earlier in the table, so it might\ncause us to miss entries during the loop. Restarting the loop each\niteration is not going to be very efficient, so instead, I opted to\nmake a version of smgrclose() that does not remove from the table so\nthat I can just wipe out all table entries at the end of the loop. I\ncalled that smgrclose_internal(). 
Maybe there's a better name, but I\ndon't really see any realistic way of not having some version that\nskips the hash table delete. I was hoping the 5 line comment I added\nto smgrcloseall() would explain the reason for the code being written\nway.\n\nAn additional small benefit is that smgrclosenode() can get away with\na single hashtable lookup rather than having to lookup the entry again\nwith smgrtable_delete(). Using smgrtable_delete_item() deletes by\nbucket rather than key value which should be a good bit faster in many\ncases. I think the SH_ENTRY_CLEANUP macro is quite useful here as I\ndon't need to worry about NULLing out the smgr_owner in yet another\nlocation where I do a hash delete.\n\n> As well I don't support modification simplehash.h for\n> SH_ENTRY_INITIALIZER,\n> SH_ENTRY_CLEANUP and SH_TRUNCATE. The initialization could comfortably\n> live in smgropen and the cleanup in smgrclose. And then SH_TRUNCATE\n> doesn't mean much.\n\nCan you share what you've got in mind here?\n\nThe problem I'm solving with SH_ENTRY_INITIALIZER is the fact that in\nSH_INSERT_HASH_INTERNAL(), when we add a new item, we do entry->SH_KEY\n= key; to set the new entries key. Since I have SH_KEY defined as:\n\n#define SH_KEY data->smgr_rnode\n\nthen I need some way to allocate the memory for ->data before the key\nis set. Doing that in smrgopen() is too late. We've already crashed by\nthen for referencing uninitialised memory.\n\nI did try putting the key in SMgrEntry but found the performance to be\nquite a bit worse than keeping the SMgrEntry down to 16 bytes. That\nmakes sense to me as we only need to compare the key when we find an\nentry with the same hash value as the one we're looking for. There's a\npretty high chance of that being the entry we want. If I got my hash\nfunction right then the odds are about 1 in 4 billion of it not being\nthe one we want. 
The only additional price we pay when we get two\nentries with the same hash value is an additional pointer dereference\nand a key comparison.\n\nDavid\n\n\n", "msg_date": "Sun, 25 Apr 2021 14:23:26 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use simplehash.h instead of dynahash in SMgr" }, { "msg_contents": "David Rowley писал 2021-04-25 05:23:\n> Thanks for having a look at this.\n> \n> \"On Sun, 25 Apr 2021 at 10:27, Yura Sokolov <y.sokolov@postgrespro.ru> \n> wrote:\n>> \n>> It is quite interesting result. Simplehash being open-addressing with\n>> linear probing is friendly for cpu cache. I'd recommend to define\n>> SH_FILLFACTOR with value lower than default (0.9). I believe 0.75 is\n>> suitable most for such kind of hash table.\n> \n> You might be right there, although, with the particular benchmark I'm\n> using the size of the table does not change as a result of that. I'd\n> need to experiment with varying numbers of relations to see if\n> dropping the fillfactor helps or hinders performance.\n> \n> FWIW, the hash stats at the end of recovery are:\n> \n> LOG: redo done at 3/C6E34F0 system usage: CPU: user: 107.00 s,\n> system: 5.61 s, elapsed: 112.67 s\n> LOG: size: 4096, members: 2032, filled: 0.496094, total chain: 997,\n> max chain: 5, avg chain: 0.490650, total_collisions: 422,\n> max_collisions: 3, avg_collisions: 0.207677\n> \n> Perhaps if try using a number of relations somewhere between 2048 *\n> 0.75 and 2048 * 0.9 then I might see some gains. Because I have 2032,\n> the hash table grew up to 4096 buckets.\n> \n> I did a quick test dropping the fillfactor down to 0.4. 
The aim there\n> was just to see if having 8192 buckets in this test would make it\n> faster or slower\n> \n> LOG: redo done at 3/C6E34F0 system usage: CPU: user: 109.61 s,\n> system: 4.28 s, elapsed: 113.93 s\n> LOG: size: 8192, members: 2032, filled: 0.248047, total chain: 303,\n> max chain: 2, avg chain: 0.149114, total_collisions: 209,\n> max_collisions: 2, avg_collisions: 0.102854\n> \n> it was slightly slower.\n\nCertainly. That is because in unmodified case you've got fillfactor 0.49\nbecause table just grew. Below somewhat near 0.6 there is no gain in \nlower\nfillfactor. But if you test it when it closer to upper bound, you will\nnotice difference. Try to test it with 3600 nodes, for example, if\ngoing down to 1800 nodes is not possible.\n\n>> > + /* rotate hashkey left 1 bit at each step */\n>> > + hashkey = (hashkey << 1) | ((hashkey & 0x80000000) ? 1 : 0);\n>> > + hashkey ^= murmurhash32((uint32) rnode->node.dbNode);\n>> \n>> Why do you use so strange rotation expression? I know compillers are\n>> able\n>> to translage `h = (h << 1) | (h >> 31)` to single rotate instruction.\n>> Do they recognize construction in your code as well?\n> \n> Not sure about all compilers, I only checked the earliest version of\n> clang and gcc at godbolt.org and they both use a single \"rol\"\n> instruction. https://godbolt.org/z/1GqdE6T3q\n\nYep, looks like all compilers recognize such construction with single\nexception of old icc compiler (both 13.0.1 and 16.0.3): \nhttps://godbolt.org/z/qsrjY5Eof\nand all compilers recognize `(h << 1) | (h >> 31)` well\n\n>> Your construction looks more like \"multiplate-modulo\" operation in \n>> 32bit\n>> Galois field . It is widely used operation in cryptographic, but it is\n>> used modulo some primitive polynomial, and 0x100000001 is not such\n>> polynomial. 0x1000000c5 is, therefore it should be:\n>> \n>> hashkey = (hashkey << 1) | ((hashkey & 0x80000000) ? 
0xc5 : 0);\n>> or\n>> hashkey = (hashkey << 1) | ((uint32)((int32)hashkey >> 31) & \n>> 0xc5);\n> \n> That does not really make sense to me. If you're shifting a 32-bit\n> variable left 31 places then why would you AND with 0xc5? The only\n> possible result is 1 or 0 depending on if the most significant bit is\n> on or off.\n\nThat is why there is a cast to signed int before shifting: `(int32)hashkey \n >> 31`.\nThe shift is then also signed, i.e. arithmetic, and the results are 0 or \n0xffffffff.\n\n>> But why not just use hash_combine(uint32 a, uint32 b) instead \n>> (defined\n>> in hashfn.h)? Yep, it could be a bit slower, but is it critical?\n> \n> I had that function in the corner of my eye when writing this, but\n> TBH, the hash function performance was just too big a factor to slow\n> it down any further by using the more expensive hash_combine()\n> function. I saw pretty good performance gains from writing my own hash\n> function rather than using hash_bytes(). I didn't want to detract from\n> that by using hash_combine(). Rotating the bits left 1 slot seems\n> good enough for hash join and hash aggregate, so I don't have any\n> reason to believe it's a bad way to combine the hash values. Do you?\n\nWell, if I think a bit more, these hash values could be combined using\njust addition: `hash(a) + hash(b) + hash(c)`.\n\nI thought more about consistency in the codebase. But it looks like both ways\n(`hash_combine(a,b)` and `rotl(a,1)^b`) are used in the code.\n- hash_combine is used one time/three lines in hashTupleDesc at \ntupledesc.c\n- rotl+xor six times:\n-- three times/three lines in execGrouping.c with a construction like \nyours\n-- three times in jsonb_util.c, multirangetypes.c and rangetypes.c with\n `(h << 1) | (h >> 31)`.\nTherefore I step back from my recommendation in this place.\n\nLooks like there is a possibility for a micropatch to unify hash combining :-)\n\n> \n> If you grep the source for \"hashkey = (hashkey << 1) | ((hashkey &\n> 0x80000000) ? 
1 : 0);\", then you'll see where else we do the same\n> rotate left trick.\n> \n>> > - * smgrclose() -- Close and delete an SMgrRelation object.\n>> > + * smgrclose() -- Close and delete an SMgrRelation object but don't\n>> > + * remove from the SMgrRelationHash table.\n>> \n>> I believe `smgrclose_internal()` should be in this comment.\n> \n> Oops. Yeah, that's a mistake.\n> \n>> Still I don't believe it worth to separate smgrclose_internal from\n>> smgrclose. Is there measurable performance improvement from this\n>> change? Even if there is, it will be lesser with SH_FILLFACTOR 0.75 .\n> \n> The reason I did that is due to the fact that smgrcloseall() loops\n> over the entire hash table and removes each entry one by one. The\n> problem is that if I do a smgrtable_delete or smgrtable_delete_item in\n> that loop then I'd need to restart the loop each time. Be aware that\n> a simplehash delete can move entries earlier in the table, so it might\n> cause us to miss entries during the loop. Restarting the loop each\n> iteration is not going to be very efficient, so instead, I opted to\n> make a version of smgrclose() that does not remove from the table so\n> that I can just wipe out all table entries at the end of the loop. I\n> called that smgrclose_internal().\n\nIf you read comments in SH_START_ITERATE, you'll see:\n\n * Search for the first empty element. As deletions during iterations \nare\n * supported, we want to start/end at an element that cannot be \naffected\n * by elements being shifted.\n\n * Iterate backwards, that allows the current element to be deleted, \neven\n * if there are backward shifts\n\nTherefore, it is safe to delete during iteration, and it doesn't lead \nnor\nrequire loop restart.\n\n> \n> An additional small benefit is that smgrclosenode() can get away with\n> a single hashtable lookup rather than having to lookup the entry again\n> with smgrtable_delete(). 
Using smgrtable_delete_item() deletes by\n> bucket rather than key value which should be a good bit faster in many\n> cases. I think the SH_ENTRY_CLEANUP macro is quite useful here as I\n> don't need to worry about NULLing out the smgr_owner in yet another\n> location where I do a hash delete.\n\nI doubt it makes sense, since smgrclosenode is called only in\nLocalExecuteInvalidationMessage, i.e. when another backend drops some\nrelation. There is no useful performance gain from it.\n\n> \n>> As well I don't support modification of simplehash.h for\n>> SH_ENTRY_INITIALIZER,\n>> SH_ENTRY_CLEANUP and SH_TRUNCATE. The initialization could comfortably\n>> live in smgropen and the cleanup in smgrclose. And then SH_TRUNCATE\n>> doesn't mean much.\n> \n> Can you share what you've got in mind here?\n> \n> The problem I'm solving with SH_ENTRY_INITIALIZER is the fact that in\n> SH_INSERT_HASH_INTERNAL(), when we add a new item, we do entry->SH_KEY\n> = key; to set the new entry's key. Since I have SH_KEY defined as:\n> \n> #define SH_KEY data->smgr_rnode\n> \n> then I need some way to allocate the memory for ->data before the key\n> is set. Doing that in smgropen() is too late. We've already crashed by\n> then for referencing uninitialised memory.\n\nOh, now I see.\nI could suggest a workaround:\n- use entry->hash as a whole key value and manually resolve hash\n collision with chaining.\nBut it looks ugly: use a hash table and still manually resolve collisions.\n\nTherefore perhaps SH_ENTRY_INITIALIZER makes sense.\n\nBut SH_ENTRY_CLEANUP is abused in the patch: it is not symmetric to\nSH_ENTRY_INITIALIZER. It smells bad. `smgr_owner` is better cleaned\nin the way it is cleaned now in smgrclose, because it is less obscure.\nAnd SH_ENTRY_CLEANUP should be just `pfree(a->data)`.\n\nAnd still no reason to have SH_TRUNCATE.\n\n> I did try putting the key in SMgrEntry but found the performance to be\n> quite a bit worse than keeping the SMgrEntry down to 16 bytes. 
That\n> makes sense to me as we only need to compare the key when we find an\n> entry with the same hash value as the one we're looking for. There's a\n> pretty high chance of that being the entry we want. If I got my hash\n> function right then the odds are about 1 in 4 billion of it not being\n> the one we want. The only additional price we pay when we get two\n> entries with the same hash value is an additional pointer dereference\n> and a key comparison.\n\nIt makes sense: the whole benefit of simplehash is cache locality, and\nit is gained with a smaller entry.\n\nregards,\nYura Sokolov\n\n\n", "msg_date": "Sun, 25 Apr 2021 09:48:52 +0300", "msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Use simplehash.h instead of dynahash in SMgr" }, { "msg_contents": "On Sun, 25 Apr 2021 at 18:48, Yura Sokolov <y.sokolov@postgrespro.ru> wrote:\n> If you read comments in SH_START_ITERATE, you'll see:\n>\n> * Search for the first empty element. As deletions during iterations \nare\n> * supported, we want to start/end at an element that cannot be \naffected\n> * by elements being shifted.\n>\n> * Iterate backwards, that allows the current element to be deleted, \neven\n> * if there are backward shifts\n>\n> Therefore, it is safe to delete during iteration, and it doesn't lead \nnor\n> require loop restart.\n\nI had only skimmed that with a pre-loaded assumption that it wouldn't\nbe safe. I didn't do a very good job of reading it as I failed to\nnotice the lack of guarantees were about deleting items other than the\ncurrent one. I didn't consider the option of finding a free bucket\nthen looping backwards to avoid missing entries that are moved up\nduring a delete.\n\nWith that, I changed the patch to get rid of the SH_TRUNCATE and got\nrid of the smgrclose_internal which skips the hash delete. The code\nis much more similar to how it was now.\n\nIn regards to the hashing stuff. 
I added a new function to\npg_bitutils.h to rotate left and I'm using that instead of the other\nexpression that was taken from nodeHash.c\n\nFor the hash function, I've done some further benchmarking with:\n\n1) The attached v2 patch\n2) The attached + plus use_hash_combine.patch.txt which uses\nhash_combine() instead of pg_rotate_left32()ing the hashkey each time.\n3) The attached v2 with use_hash_bytes.patch.txt applied.\n4) Master\n\nI've also included the hash stats from each version of the hash function.\n\nI hope the numbers help indicate the reason I picked the hash function\nthat I did.\n\n1) v2 patch.\nCPU: user: 108.23 s, system: 6.97 s, elapsed: 115.63 s\nCPU: user: 114.78 s, system: 6.88 s, elapsed: 121.71 s\nCPU: user: 107.53 s, system: 5.70 s, elapsed: 113.28 s\nCPU: user: 108.43 s, system: 5.73 s, elapsed: 114.22 s\nCPU: user: 106.18 s, system: 5.73 s, elapsed: 111.96 s\nCPU: user: 108.04 s, system: 5.29 s, elapsed: 113.39 s\nCPU: user: 107.64 s, system: 5.64 s, elapsed: 113.34 s\nCPU: user: 106.64 s, system: 5.58 s, elapsed: 112.27 s\nCPU: user: 107.91 s, system: 5.40 s, elapsed: 113.36 s\nCPU: user: 115.35 s, system: 6.60 s, elapsed: 122.01 s\n\nMedian = 113.375 s\n\nLOG: size: 4096, members: 2032, filled: 0.496094, total chain: 997,\nmax chain: 5, avg chain: 0.490650, total_collisions: 422,\nmax_collisions: 3, avg_collisions: 0.207677\n\n2) v2 patch + use_hash_combine.patch.txt\nCPU: user: 113.22 s, system: 5.52 s, elapsed: 118.80 s\nCPU: user: 116.63 s, system: 5.87 s, elapsed: 122.56 s\nCPU: user: 115.33 s, system: 5.73 s, elapsed: 121.12 s\nCPU: user: 113.11 s, system: 5.61 s, elapsed: 118.78 s\nCPU: user: 112.56 s, system: 5.51 s, elapsed: 118.13 s\nCPU: user: 114.55 s, system: 5.80 s, elapsed: 120.40 s\nCPU: user: 121.79 s, system: 6.45 s, elapsed: 128.29 s\nCPU: user: 113.98 s, system: 4.50 s, elapsed: 118.52 s\nCPU: user: 113.24 s, system: 5.63 s, elapsed: 118.93 s\nCPU: user: 114.11 s, system: 5.60 s, elapsed: 119.78 s\n\nMedian = 119.355 
s\n\nLOG: size: 4096, members: 2032, filled: 0.496094, total chain: 971,\nmax chain: 6, avg chain: 0.477854, total_collisions: 433,\nmax_collisions: 4, avg_collisions: 0.213091\n\n3) v2 patch + use_hash_bytes.patch.txt\nCPU: user: 120.87 s, system: 6.69 s, elapsed: 127.62 s\nCPU: user: 112.40 s, system: 4.68 s, elapsed: 117.14 s\nCPU: user: 113.19 s, system: 5.44 s, elapsed: 118.69 s\nCPU: user: 112.15 s, system: 4.73 s, elapsed: 116.93 s\nCPU: user: 111.10 s, system: 5.59 s, elapsed: 116.74 s\nCPU: user: 112.03 s, system: 5.74 s, elapsed: 117.82 s\nCPU: user: 113.69 s, system: 4.33 s, elapsed: 118.07 s\nCPU: user: 113.30 s, system: 4.19 s, elapsed: 117.55 s\nCPU: user: 112.77 s, system: 5.57 s, elapsed: 118.39 s\nCPU: user: 112.25 s, system: 4.59 s, elapsed: 116.88 s\n\nMedian = 117.685 s\n\nLOG: size: 4096, members: 2032, filled: 0.496094, total chain: 900,\nmax chain: 4, avg chain: 0.442913, total_collisions: 415,\nmax_collisions: 4, avg_collisions: 0.204232\n\n4) master\nCPU: user: 117.89 s, system: 5.7 s, elapsed: 123.65 s\nCPU: user: 117.81 s, system: 5.74 s, elapsed: 123.62 s\nCPU: user: 119.39 s, system: 5.75 s, elapsed: 125.2 s\nCPU: user: 117.98 s, system: 4.39 s, elapsed: 122.41 s\nCPU: user: 117.92 s, system: 4.79 s, elapsed: 122.76 s\nCPU: user: 119.84 s, system: 4.75 s, elapsed: 124.64 s\nCPU: user: 120.6 s, system: 5.82 s, elapsed: 126.49 s\nCPU: user: 118.74 s, system: 5.71 s, elapsed: 124.51 s\nCPU: user: 124.29 s, system: 6.79 s, elapsed: 131.14 s\nCPU: user: 118.73 s, system: 5.67 s, elapsed: 124.47 s\n\nMedian = 124.49 s\n\nYou can see that the bare v2 patch is quite a bit faster than any of\nthe alternatives. 
We'd be better off with hash_bytes than using\nhash_combine() both for performance and for the seemingly better job\nthe hash function does at reducing the hash collisions.\n\nDavid", "msg_date": "Mon, 26 Apr 2021 01:36:21 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use simplehash.h instead of dynahash in SMgr" }, { "msg_contents": "David Rowley wrote 2021-04-25 16:36:\n> On Sun, 25 Apr 2021 at 18:48, Yura Sokolov <y.sokolov@postgrespro.ru> \n> wrote:\n>> If you read comments in SH_START_ITERATE, you'll see:\n>> \n>> * Search for the first empty element. As deletions during \n>> iterations\n>> are\n>> * supported, we want to start/end at an element that cannot be\n>> affected\n>> * by elements being shifted.\n>> \n>> * Iterate backwards, that allows the current element to be deleted,\n>> even\n>> * if there are backward shifts\n>> \n>> Therefore, it is safe to delete during iteration, and it doesn't lead\n>> nor\n>> require loop restart.\n> \n> I had only skimmed that with a pre-loaded assumption that it wouldn't\n> be safe. I didn't do a very good job of reading it as I failed to\n> notice the lack of guarantees were about deleting items other than the\n> current one. I didn't consider the option of finding a free bucket\n> then looping backwards to avoid missing entries that are moved up\n> during a delete.\n> \n> With that, I changed the patch to get rid of the SH_TRUNCATE and got\n> rid of the smgrclose_internal which skips the hash delete. The code\n> is much more similar to how it was now.\n> \n> In regards to the hashing stuff. 
I added a new function to\n> pg_bitutils.h to rotate left and I'm using that instead of the other\n> expression that was taken from nodeHash.c\n> \n> For the hash function, I've done some further benchmarking with:\n> \n> 1) The attached v2 patch\n> 2) The attached + plus use_hash_combine.patch.txt which uses\n> hash_combine() instead of pg_rotate_left32()ing the hashkey each time.\n> 3) The attached v2 with use_hash_bytes.patch.txt applied.\n> 4) Master\n> \n> I've also included the hash stats from each version of the hash \n> function.\n> \n> I hope the numbers help indicate the reason I picked the hash function\n> that I did.\n> \n> 1) v2 patch.\n> Median = 113.375 s\n> \n> LOG: size: 4096, members: 2032, filled: 0.496094, total chain: 997,\n> max chain: 5, avg chain: 0.490650, total_collisions: 422,\n> max_collisions: 3, avg_collisions: 0.207677\n> \n> 2) v2 patch + use_hash_combine.patch.txt\n> Median = 119.355 s\n> \n> LOG: size: 4096, members: 2032, filled: 0.496094, total chain: 971,\n> max chain: 6, avg chain: 0.477854, total_collisions: 433,\n> max_collisions: 4, avg_collisions: 0.213091\n> \n> 3) v2 patch + use_hash_bytes.patch.txt\n> Median = 117.685 s\n> \n> LOG: size: 4096, members: 2032, filled: 0.496094, total chain: 900,\n> max chain: 4, avg chain: 0.442913, total_collisions: 415,\n> max_collisions: 4, avg_collisions: 0.204232\n> \n> 4) master\n> Median = 124.49 s\n> \n> You can see that the bare v2 patch is quite a bit faster than any of\n> the alternatives. We'd be better of with hash_bytes than using\n> hash_combine() both for performance and for the seemingly better job\n> the hash function does at reducing the hash collisions.\n> \n> David\n\nLooks much better! Simpler is almost always better.\n\nMinor remarks:\n\nComment for SH_ENTRY_INITIALIZER e. May be like:\n- SH_ENTRY_INITIALIZER(a) - if defined, this macro is called for new \nentries\n before key or hash is stored in. 
For example, it can be used to make\n necessary memory allocations.\n\n`pg_rotate_left32(x, 1) == pg_rotate_right(x, 31)`, therefore there's\nno need to add `pg_rotate_left32` at all. Moreover, for hash combining\nthere's not much difference between `pg_rotate_left32(x, 1)` and\n`pg_rotate_right(x, 1)`. (To be honest, there is a bit of difference\ndue to murmur construction, but it should not be very big.)\n\nIf your test is so sensitive to hash function speed, then I'd suggest\nto try something even simpler:\n\nstatic inline uint32\nrelfilenodebackend_hash(RelFileNodeBackend *rnode)\n{\n\tuint32\t\th = 0;\n#define step(x) h ^= (uint32)(x) * 0x85ebca6b; h = pg_rotate_right(h, \n11); h *= 9;\n\tstep(rnode->node.relNode);\n\tstep(rnode->node.spcNode); // spcNode could be different for same \nrelNode only\n // during table movement. Does it pay \nto hash it?\n\tstep(rnode->node.dbNode);\n\tstep(rnode->backend); // does it matter to hash backend?\n // It equals to InvalidBackendId for \nnon-temporary relations\n // and temporary relations in the same \ndatabase never have the same\n // relNode (do they?).\n\treturn murmurhash32(h);\n}\n\nI'd like to see benchmark code. It is quite interesting that this place became\nmeasurable at all.\n\nregards,\nYura Sokolov.", "msg_date": "Sun, 25 Apr 2021 20:03:11 +0300", "msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Use simplehash.h instead of dynahash in SMgr" }, { "msg_contents": "On Mon, 26 Apr 2021 at 05:03, Yura Sokolov <y.sokolov@postgrespro.ru> wrote:\n> If your test is so sensitive to hash function speed, then I'd suggest\n> to try something even simpler:\n>\n> static inline uint32\n> relfilenodebackend_hash(RelFileNodeBackend *rnode)\n> {\n> uint32 h = 0;\n> #define step(x) h ^= (uint32)(x) * 0x85ebca6b; h = pg_rotate_right(h,\n> 11); h *= 9;\n> step(rnode->node.relNode);\n> step(rnode->node.spcNode); // spcNode could be different for same\n> relNode only\n> // during table movement. 
Does it pay\n> to hash it?\n> step(rnode->node.dbNode);\n> step(rnode->backend); // does it matter to hash backend?\n> // It equals to InvalidBackendId for\n> non-temporary relations\n> // and temporary relations in the same\n> database never have the same\n> // relNode (do they?).\n> return murmurhash32(h);\n> }\n\nI tried that and it got a median result of 113.795 seconds over 14\nruns with this recovery benchmark test.\n\nLOG: size: 4096, members: 2032, filled: 0.496094, total chain: 1014,\nmax chain: 6, avg chain: 0.499016, total_collisions: 428,\nmax_collisions: 3, avg_collisions: 0.210630\n\nI also tried the following hash function just to see how much\nperformance might be left from speeding it up:\n\nstatic inline uint32\nrelfilenodebackend_hash(RelFileNodeBackend *rnode)\n{\nuint32 h;\n\nh = pg_rotate_right32((uint32) rnode->node.relNode, 16) ^ ((uint32)\nrnode->node.dbNode);\nreturn murmurhash32(h);\n}\n\nI got a median of 112.685 seconds over 14 runs with:\n\nLOG: size: 4096, members: 2032, filled: 0.496094, total chain: 1044,\nmax chain: 7, avg chain: 0.513780, total_collisions: 438,\nmax_collisions: 3, avg_collisions: 0.215551\n\nSo it looks like there might not be too much left given that v2 was\n113.375 seconds (median over 10 runs)\n\n> I'd like to see benchmark code. 
It quite interesting this place became\n> measurable at all.\n\nSure.\n\n$ cat recoverybench_insert_hash.sh\n#!/bin/bash\n\npg_ctl stop -D pgdata -m smart\npg_ctl start -D pgdata -l pg.log -w\npsql -f setup1.sql postgres > /dev/null\npsql -c \"create table log_wal (lsn pg_lsn not null);\" postgres > /dev/null\npsql -c \"insert into log_wal values(pg_current_wal_lsn());\" postgres > /dev/null\npsql -c \"insert into hp select x,0 from generate_series(1,100000000)\nx;\" postgres > /dev/null\npsql -c \"insert into log_wal values(pg_current_wal_lsn());\" postgres > /dev/null\npsql -c \"select 'Used ' ||\npg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), lsn)) || ' of\nWAL' from log_wal limit 1;\" postgres\npg_ctl stop -D pgdata -m immediate -w\necho Starting Postgres...\npg_ctl start -D pgdata -l pg.log\n\n$ cat setup1.sql\ndrop table if exists hp;\ncreate table hp (a int primary key, b int not null) partition by hash(a);\nselect 'create table hp'||x|| ' partition of hp for values with\n(modulus 1000, remainder '||x||');' from generate_Series(0,999) x;\n\\gexec\n\nconfig:\nshared_buffers = 10GB\ncheckpoint_timeout = 60min\nmax_wal_size = 20GB\nmin_wal_size = 20GB\n\nFor subsequent runs, if you apply the patch that does the PANIC at the\nend of recovery, you'll just need to start the database up again to\nperform recovery again. You can then just tail -f on your postgres\nlogs to watch for the \"redo done\" message which will show you the time\nspent doing recovery.\n\nDavid.\n\n\n", "msg_date": "Mon, 26 Apr 2021 18:43:48 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use simplehash.h instead of dynahash in SMgr" }, { "msg_contents": "Hi,\n\nOn 2021-04-25 01:27:24 +0300, Yura Sokolov wrote:\n> It is quite interesting result. Simplehash being open-addressing with\n> linear probing is friendly for cpu cache. I'd recommend to define\n> SH_FILLFACTOR with value lower than default (0.9). 
I believe 0.75 is\n> suitable most for such kind of hash table.\n\nIt's not a \"plain\" linear probing hash table (although it is on the lookup\nside). During insertions collisions are reordered so that the average distance\nfrom the \"optimal\" position is ~even (\"robin hood hashing\"). That allows a\nhigher load factor than a plain linear probed hash table (for which IIRC\nthere's data to show 0.75 to be a good default load factor).\n\nThere of course may still be a benefit in lowering the load factor, but I'd\nnot start there.\n\nDavid's tests aren't really suited to benchmarking the load factor, but to me\nthe stats he showed didn't highlight a need to lower the load factor. Lowering\nthe fill factor does influence the cache hit ratio...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 26 Apr 2021 11:46:38 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Use simplehash.h instead of dynahash in SMgr" }, { "msg_contents": "Andres Freund wrote 2021-04-26 21:46:\n> Hi,\n> \n> On 2021-04-25 01:27:24 +0300, Yura Sokolov wrote:\n>> It is quite interesting result. Simplehash being open-addressing with\n>> linear probing is friendly for cpu cache. I'd recommend to define\n>> SH_FILLFACTOR with value lower than default (0.9). I believe 0.75 is\n>> suitable most for such kind of hash table.\n> \n> It's not a \"plain\" linear probing hash table (although it is on the \n> lookup\n> side). During insertions collisions are reordered so that the average \n> distance\n> from the \"optimal\" position is ~even (\"robin hood hashing\"). That \n> allows a\n> higher load factor than a plain linear probed hash table (for which \n> IIRC\n> there's data to show 0.75 to be a good default load factor).\n\nEven for Robin Hood hashing a 0.9 fill factor is too high. It leads to too\nmany movements on insertion/deletion and a longer average collision chain.\n\nNote that Robin Hood doesn't optimize average case. 
Indeed, usually \nRobin Hood\nhas a worse (longer) average collision chain than simple linear probing.\nRobin Hood hashing optimizes the worst case, i.e. it guarantees there is no \nunnecessarily\nlong collision chain.\n\nSee discussion on the Rust hash table fill factor when it was Robin Hood:\nhttps://github.com/rust-lang/rust/issues/38003\n\n> \n> There of course may still be a benefit in lowering the load factor, but \n> I'd\n> not start there.\n> \n> David's tests aren't really suited to benchmarking the load factor, but \n> to me\n> the stats he showed didn't highlight a need to lower the load factor. \n> Lowering\n> the fill factor does influence the cache hit ratio...\n> \n> Greetings,\n> \n> Andres Freund\n\nregards,\nYura.\n\n\n", "msg_date": "Mon, 26 Apr 2021 22:44:13 +0300", "msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Use simplehash.h instead of dynahash in SMgr" }, { "msg_contents": "Hi,\n\nOn 2021-04-26 22:44:13 +0300, Yura Sokolov wrote:\n> Even for Robin Hood hashing a 0.9 fill factor is too high. It leads to too\n> many movements on insertion/deletion and a longer average collision chain.\n\nThat's true for modification heavy cases - but most hash tables in PG,\nincluding the smgr one, are quite read heavy. For workloads where\nthere's a lot of smgr activity, the other overheads in relation\ncreation/drop handling are magnitudes more expensive than the collision\nhandling.\n\nNote that simplehash.h also grows when the distance gets too big and\nwhen there are too many elements to move, not just based on the fill\nfactor.\n\n\nI kinda wish we had a chained hashtable implementation with the same\ninterface as simplehash. 
It's very use-case dependent which approach is\nbetter, and right now we might be forcing some users to choose linear\nprobing because simplehash.h is still faster than dynahash, even though\nchaining would actually be more appropriate.\n\n\n> Note that Robin Hood doesn't optimize average case.\n\nRight.\n\n\n> See discussion on the Rust hash table fill factor when it was Robin Hood:\n> https://github.com/rust-lang/rust/issues/38003\n\nThe first sentence actually confirms my point above, about it being a\nquestion of read vs write heavy.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 26 Apr 2021 12:58:46 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Use simplehash.h instead of dynahash in SMgr" }, { "msg_contents": "David Rowley wrote 2021-04-26 09:43:\n> I tried that and it got a median result of 113.795 seconds over 14\n> runs with this recovery benchmark test.\n> \n> LOG: size: 4096, members: 2032, filled: 0.496094, total chain: 1014,\n> max chain: 6, avg chain: 0.499016, total_collisions: 428,\n> max_collisions: 3, avg_collisions: 0.210630\n> \n> I also tried the following hash function just to see how much\n> performance might be left from speeding it up:\n> \n> static inline uint32\n> relfilenodebackend_hash(RelFileNodeBackend *rnode)\n> {\n> uint32 h;\n> \n> h = pg_rotate_right32((uint32) rnode->node.relNode, 16) ^ ((uint32)\n> rnode->node.dbNode);\n> return murmurhash32(h);\n> }\n> \n> I got a median of 112.685 seconds over 14 runs with:\n> \n> LOG: size: 4096, members: 2032, filled: 0.496094, total chain: 1044,\n> max chain: 7, avg chain: 0.513780, total_collisions: 438,\n> max_collisions: 3, avg_collisions: 0.215551\n\nThe best result is with just:\n\n return (uint32)rnode->node.relNode;\n\ni.e., relNode could be taken without mixing at all.\nrelNode is unique inside a single database, and almost unique among the whole \ncluster\nsince it is an Oid.\n\n>> I'd like to see benchmark code. 
It is quite interesting that this place became\n>> measurable at all.\n> \n> Sure.\n> \n> $ cat recoverybench_insert_hash.sh\n> ....\n> \n> David.\n\nSo, I've repeated the benchmark with different numbers of partitions (I tried\nto catch a higher fillfactor) and a smaller amount of inserted data (since I \ndon't\nwant to stress my SSD).\n\npartitions/ | dynahash | dynahash + | simplehash | simplehash + |\nfillfactor | | simple func | | simple func |\n------------+----------+-------------+------------+--------------+\n 3500/0.43 | 3.73s | 3.54s | 3.58s | 3.34s |\n 3200/0.78 | 3.64s | 3.46s | 3.47s | 3.25s |\n 1500/0.74 | 3.18s | 2.97s | 3.03s | 2.79s |\n\nFillfactor is the effective fillfactor in simplehash with that number of \npartitions.\nI wasn't able to measure with fillfactor close to 0.9 since it looks like\nsimplehash tends to grow much earlier due to SH_GROW_MAX_MOVE.\n\n\"Simple func\" is a hash function that returns only rnode->node.relNode.\nI've tested it both with simplehash and dynahash.\nFor dynahash a custom match function was also made.\n\nConclusion:\n- the trivial hash function gives better results for both simplehash and \ndynahash,\n- simplehash improves performance for both the complex and trivial hash \nfunction,\n- simplehash + trivial function performs best.\n\nI'd like to hear other people's comments on the trivial hash function. But \nsince\ngeneration of relation Oids is not subject to human intervention, I'd \nrecommend\nto stick with the trivial one.\n\nSince the patch is simple, harmless and gives a measurable improvement,\nI think it is ready for commit fest.\n\nregards,\nYura Sokolov.\nPostgres Professional https://www.postgrespro.com\n\nPS. 
David, please send patch once again since my mail client reattached \nfiles in\nprevious messages, and commit fest robot could think I'm author.\n\n\n", "msg_date": "Wed, 28 Apr 2021 15:28:57 +0300", "msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Use simplehash.h instead of dynahash in SMgr" }, { "msg_contents": "On Thu, 29 Apr 2021 at 00:28, Yura Sokolov <y.sokolov@postgrespro.ru> wrote:\n> The best result is with just:\n>\n> return (uint32)rnode->node.relNode;\n>\n> ie, relNode could be taken without mixing at all.\n> relNode is unique inside single database, and almost unique among whole\n> cluster\n> since it is Oid.\n\nI admit to having tried that too just to almost eliminate the cost of\nhashing. I just didn't consider it something we'd actually do.\n\nThe system catalogues are quite likely to all have the same\nrelfilenode in all databases, so for workloads that have a large\nnumber of databases, looking up WAL records that touch the catalogues\nis going to be pretty terrible.\n\nThe simplified hash function I wrote with just the relNode and dbNode\nwould be the least I'd be willing to entertain. 
However, I just\nwouldn't be surprised if there was a good reason for that being bad\ntoo.\n\n\n> So, I've repeated benchmark with different number of partitons (I tried\n> to catch higher fillfactor) and less amount of inserted data (since I\n> don't\n> want to stress my SSD).\n>\n> partitions/ | dynahash | dynahash + | simplehash | simplehash + |\n> fillfactor | | simple func | | simple func |\n> ------------+----------+-------------+--------------+\n> 3500/0.43 | 3.73s | 3.54s | 3.58s | 3.34s |\n> 3200/0.78 | 3.64s | 3.46s | 3.47s | 3.25s |\n> 1500/0.74 | 3.18s | 2.97s | 3.03s | 2.79s |\n>\n> Fillfactor is effective fillfactor in simplehash with than number of\n> partitions.\n> I wasn't able to measure with fillfactor close to 0.9 since looks like\n> simplehash tends to grow much earlier due to SH_GROW_MAX_MOVE.\n\nThanks for testing that.\n\nI also ran some tests last night to test the 0.75 vs 0.9 fillfactor to\nsee if it made a difference. The test was similar to last time, but I\ntrimmed the number of rows inserted from 100 million down to 25\nmillion. Last time I tested with 1000 partitions, this time with each\nof: 100 200 300 400 500 600 700 800 900 1000 partitions. There didn't\nseem to be any point of testing lower than that as the minimum hash\ntable size is 512.\n\nThe averages over 10 runs were:\n\nnparts ff75 ff90\n100 21.898 22.226\n200 23.105 25.493\n300 25.274 24.251\n400 25.139 25.611\n500 25.738 25.454\n600 26.656 26.82\n700 27.577 27.102\n800 27.608 27.546\n900 27.284 28.186\n1000 29 28.153\n\nOr to summarise a bit, we could just look at the sum of all the\nresults per fillfactor:\n\nsum ff75 2592.79\nsum ff90 2608.42 100.6%\n\nfillfactor 75 did come out slightly faster, but only just. It seems\nclose enough that it might be better just to keep the 90 to save a\nlittle memory and improve caching elsewhere. 
Also, from below, you\ncan see that for 300, 500, 600, 700, 1000 tables tests, the hash\ntables ended up the same size, yet there's a bit of variability in the\ntiming result. So the 0.6% gain might just be noise.\n\nI don't think it's worth making the fillfactor 75.\n\ndrowley@amd3990x:~/recoverylogs$ grep -rH -m 1 \"collisions\"\nff75_tb100.log:LOG: size: 1024, members: 231, filled: 0.225586, total\nchain: 33, max chain: 2, avg chain: 0.142857, total_collisions: 20,\nmax_collisions: 2, avg_collisions: 0.086580\nff90_tb100.log:LOG: size: 512, members: 231, filled: 0.451172, total\nchain: 66, max chain: 2, avg chain: 0.285714, total_collisions: 36,\nmax_collisions: 2, avg_collisions: 0.155844\n\nff75_tb200.log:LOG: size: 1024, members: 431, filled: 0.420898, total\nchain: 160, max chain: 4, avg chain: 0.371230, total_collisions: 81,\nmax_collisions: 3, avg_collisions: 0.187935\nff90_tb200.log:LOG: size: 512, members: 431, filled: 0.841797, total\nchain: 942, max chain: 9, avg chain: 2.185615, total_collisions: 134,\nmax_collisions: 3, avg_collisions: 0.310905\n\nff90_tb300.log:LOG: size: 1024, members: 631, filled: 0.616211, total\nchain: 568, max chain: 9, avg chain: 0.900158, total_collisions: 158,\nmax_collisions: 4, avg_collisions: 0.250396\nff75_tb300.log:LOG: size: 1024, members: 631, filled: 0.616211, total\nchain: 568, max chain: 9, avg chain: 0.900158, total_collisions: 158,\nmax_collisions: 4, avg_collisions: 0.250396\n\nff75_tb400.log:LOG: size: 2048, members: 831, filled: 0.405762, total\nchain: 341, max chain: 4, avg chain: 0.410349, total_collisions: 162,\nmax_collisions: 3, avg_collisions: 0.194946\nff90_tb400.log:LOG: size: 1024, members: 831, filled: 0.811523, total\nchain: 1747, max chain: 15, avg chain: 2.102286, total_collisions:\n269, max_collisions: 3, avg_collisions: 0.323706\n\nff75_tb500.log:LOG: size: 2048, members: 1031, filled: 0.503418,\ntotal chain: 568, max chain: 5, avg chain: 0.550921, total_collisions:\n219, max_collisions: 4, 
avg_collisions: 0.212415\nff90_tb500.log:LOG: size: 2048, members: 1031, filled: 0.503418,\ntotal chain: 568, max chain: 5, avg chain: 0.550921, total_collisions:\n219, max_collisions: 4, avg_collisions: 0.212415\n\nff75_tb600.log:LOG: size: 2048, members: 1231, filled: 0.601074,\ntotal chain: 928, max chain: 7, avg chain: 0.753859, total_collisions:\n298, max_collisions: 4, avg_collisions: 0.242080\nff90_tb600.log:LOG: size: 2048, members: 1231, filled: 0.601074,\ntotal chain: 928, max chain: 7, avg chain: 0.753859, total_collisions:\n298, max_collisions: 4, avg_collisions: 0.242080\n\nff75_tb700.log:LOG: size: 2048, members: 1431, filled: 0.698730,\ntotal chain: 1589, max chain: 9, avg chain: 1.110412,\ntotal_collisions: 391, max_collisions: 4, avg_collisions: 0.273235\nff90_tb700.log:LOG: size: 2048, members: 1431, filled: 0.698730,\ntotal chain: 1589, max chain: 9, avg chain: 1.110412,\ntotal_collisions: 391, max_collisions: 4, avg_collisions: 0.273235\n\nff75_tb800.log:LOG: size: 4096, members: 1631, filled: 0.398193,\ntotal chain: 628, max chain: 6, avg chain: 0.385040, total_collisions:\n296, max_collisions: 3, avg_collisions: 0.181484\nff90_tb800.log:LOG: size: 2048, members: 1631, filled: 0.796387,\ntotal chain: 2903, max chain: 12, avg chain: 1.779890,\ntotal_collisions: 515, max_collisions: 3, avg_collisions: 0.315757\n\nff75_tb900.log:LOG: size: 4096, members: 1831, filled: 0.447021,\ntotal chain: 731, max chain: 5, avg chain: 0.399235, total_collisions:\n344, max_collisions: 3, avg_collisions: 0.187875\nff90_tb900.log:LOG: size: 2048, members: 1831, filled: 0.894043,\ntotal chain: 6364, max chain: 14, avg chain: 3.475696,\ntotal_collisions: 618, max_collisions: 4, avg_collisions: 0.337520\n\nff75_tb1000.log:LOG: size: 4096, members: 2031, filled: 0.495850,\ntotal chain: 1024, max chain: 6, avg chain: 0.504185,\ntotal_collisions: 416, max_collisions: 3, avg_collisions: 0.204825\nff90_tb1000.log:LOG: size: 4096, members: 2031, filled: 0.495850,\ntotal 
chain: 1024, max chain: 6, avg chain: 0.504185,\ntotal_collisions: 416, max_collisions: 3, avg_collisions: 0.204825\n\n\nAnother line of thought for making it go faster would be to do\nsomething like get rid of the hash status field from SMgrEntry. That\ncould be either coded into a single bit we'd borrow from the hash\nvalue, or it could be coded into the least significant bit of the data\nfield. A pointer to palloc'd memory should always be MAXALIGNed,\nwhich means at least the lower two bits are always zero. We'd just\nneed to make sure and do something like \"data & ~((uintptr_t) 3)\" to\ntrim off the hash status bits before dereferencing the pointer. That\nwould make the SMgrEntry 12 bytes on a 64-bit machine. However, it\nwould also mean that some entries would span 2 cache lines, which\nmight affect performance a bit.\n\n> PS. David, please send patch once again since my mail client reattached\n> files in\n> previous messages, and commit fest robot could think I'm author.\n\nAuthors are listed manually in the CF app. The app will pickup .patch\nfiles from the latest email in the thread and the CF bot will test\nthose. So it does pay to be pretty careful when attaching patches to\nthreads that are in the CF app. That's the reason I added the .txt\nextension to the recovery panic patch. 
The CF bot machines would have\ncomplained about that.\n\n\n", "msg_date": "Thu, 29 Apr 2021 11:51:07 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use simplehash.h instead of dynahash in SMgr" }, { "msg_contents": "David Rowley писал 2021-04-29 02:51:\n> On Thu, 29 Apr 2021 at 00:28, Yura Sokolov <y.sokolov@postgrespro.ru> \n> wrote:\n>> The best result is with just:\n>> \n>> return (uint32)rnode->node.relNode;\n>> \n>> ie, relNode could be taken without mixing at all.\n>> relNode is unique inside single database, and almost unique among \n>> whole\n>> cluster\n>> since it is Oid.\n> \n> I admit to having tried that too just to almost eliminate the cost of\n> hashing. I just didn't consider it something we'd actually do.\n> \n> The system catalogues are quite likely to all have the same\n> relfilenode in all databases, so for workloads that have a large\n> number of databases, looking up WAL records that touch the catalogues\n> is going to be pretty terrible.\n\nSingle workload that could touch system catalogues in different\ndatabases is recovery (and autovacuum?). Client backends couldn't\nbe connected to more than one database.\n\nBut netherless, you're right. I oversimplified it intentionally.\nI wrote originally:\n\n hashcode = (uint32)rnode->node.dbNode * 0xc2b2ae35;\n hashcode ^= (uint32)rnode->node.relNode;\n return hashcode;\n\nBut before sending mail I'd cut dbNode part.\nStill, main point: relNode could be put unmixed into final value.\nThis way less collisions happen.\n\n> \n> The simplified hash function I wrote with just the relNode and dbNode\n> would be the least I'd be willing to entertain. 
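(To make the function shape under discussion concrete, here is a hedged sketch using a local stand-in struct and the multiplicative constant from the snippet above; this is illustrative only, not the committed code: dbNode gets scrambled, while relNode, already close to unique across the cluster, is folded in unmixed.)

```c
#include <stdint.h>

/* Local stand-in for the real RelFileNode (whose fields are Oids). */
typedef struct RelFileNode
{
	uint32_t	spcNode;
	uint32_t	dbNode;
	uint32_t	relNode;
} RelFileNode;

/*
 * Hedged sketch of the hash shape being discussed, not the committed
 * function: mix dbNode with a multiplicative constant, fold relNode in
 * unmixed so that distinct relations in one database never collide in
 * the low bits.
 */
static uint32_t
relfilenode_hash(const RelFileNode *rnode)
{
	return (rnode->dbNode * 0xc2b2ae35u) ^ rnode->relNode;
}
```

With this shape, two relations in the same database differ in the hash exactly where their relNodes differ, while the same relfilenode in two databases still hashes differently.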
However, I just\n> wouldn't be surprised if there was a good reason for that being bad\n> too.\n> \n> \n>> So, I've repeated benchmark with different number of partitons (I \n>> tried\n>> to catch higher fillfactor) and less amount of inserted data (since I\n>> don't\n>> want to stress my SSD).\n>> \n>> partitions/ | dynahash | dynahash + | simplehash | simplehash + |\n>> fillfactor | | simple func | | simple func |\n>> ------------+----------+-------------+--------------+\n>> 3500/0.43 | 3.73s | 3.54s | 3.58s | 3.34s |\n>> 3200/0.78 | 3.64s | 3.46s | 3.47s | 3.25s |\n>> 1500/0.74 | 3.18s | 2.97s | 3.03s | 2.79s |\n>> \n>> Fillfactor is effective fillfactor in simplehash with than number of\n>> partitions.\n>> I wasn't able to measure with fillfactor close to 0.9 since looks like\n>> simplehash tends to grow much earlier due to SH_GROW_MAX_MOVE.\n> \n> Thanks for testing that.\n> \n> I also ran some tests last night to test the 0.75 vs 0.9 fillfactor to\n> see if it made a difference. The test was similar to last time, but I\n> trimmed the number of rows inserted from 100 million down to 25\n> million. Last time I tested with 1000 partitions, this time with each\n> of: 100 200 300 400 500 600 700 800 900 1000 partitions. There didn't\n> seem to be any point of testing lower than that as the minimum hash\n> table size is 512.\n> \n> The averages over 10 runs were:\n> \n> nparts ff75 ff90\n> 100 21.898 22.226\n> 200 23.105 25.493\n> 300 25.274 24.251\n> 400 25.139 25.611\n> 500 25.738 25.454\n> 600 26.656 26.82\n> 700 27.577 27.102\n> 800 27.608 27.546\n> 900 27.284 28.186\n> 1000 29 28.153\n> \n> Or to summarise a bit, we could just look at the sum of all the\n> results per fillfactor:\n> \n> sum ff75 2592.79\n> sum ff90 2608.42 100.6%\n> \n> fillfactor 75 did come out slightly faster, but only just. It seems\n> close enough that it might be better just to keep the 90 to save a\n> little memory and improve caching elsewhere. 
Also, from below, you\n> can see that for 300, 500, 600, 700, 1000 tables tests, the hash\n> tables ended up the same size, yet there's a bit of variability in the\n> timing result. So the 0.6% gain might just be noise.\n> \n> I don't think it's worth making the fillfactor 75.\n\nTo be clear: I didn't change SH_FILLFACTOR. It were equal to 0.9 .\nI just were not able to catch table with fill factor more than 0.78.\nLooks like you've got it with 900 partitions :-)\n\n> \n> Another line of thought for making it go faster would be to do\n> something like get rid of the hash status field from SMgrEntry. That\n> could be either coded into a single bit we'd borrow from the hash\n> value, or it could be coded into the least significant bit of the data\n> field. A pointer to palloc'd memory should always be MAXALIGNed,\n> which means at least the lower two bits are always zero. We'd just\n> need to make sure and do something like \"data & ~((uintptr_t) 3)\" to\n> trim off the hash status bits before dereferencing the pointer. That\n> would make the SMgrEntry 12 bytes on a 64-bit machine. However, it\n> would also mean that some entries would span 2 cache lines, which\n> might affect performance a bit.\n\nThen data pointer will be itself unaligned to 8 bytes. While x86 is\nmostly indifferent to this, doubtfully this memory economy will pay\noff.\n\nregards,\nYura Sokolov.\n\n\n", "msg_date": "Thu, 29 Apr 2021 03:30:47 +0300", "msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Use simplehash.h instead of dynahash in SMgr" }, { "msg_contents": "On Thu, 29 Apr 2021 at 12:30, Yura Sokolov <y.sokolov@postgrespro.ru> wrote:\n>\n> David Rowley писал 2021-04-29 02:51:\n> > Another line of thought for making it go faster would be to do\n> > something like get rid of the hash status field from SMgrEntry. 
That\n> > could be either coded into a single bit we'd borrow from the hash\n> > value, or it could be coded into the least significant bit of the data\n> > field. A pointer to palloc'd memory should always be MAXALIGNed,\n> > which means at least the lower two bits are always zero. We'd just\n> > need to make sure and do something like \"data & ~((uintptr_t) 3)\" to\n> > trim off the hash status bits before dereferencing the pointer. That\n> > would make the SMgrEntry 12 bytes on a 64-bit machine. However, it\n> > would also mean that some entries would span 2 cache lines, which\n> > might affect performance a bit.\n>\n> Then data pointer will be itself unaligned to 8 bytes. While x86 is\n> mostly indifferent to this, doubtfully this memory economy will pay\n> off.\n\nActually, I didn't think very hard about that. The struct would still\nbe 16 bytes and just have padding so the data pointer was aligned to 8\nbytes (assuming a 64-bit machine).\n\nDavid\n\n\n", "msg_date": "Thu, 29 Apr 2021 16:19:03 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use simplehash.h instead of dynahash in SMgr" }, { "msg_contents": "I've attached an updated patch. I forgot to call SH_ENTRY_CLEANUP,\nwhen it's defined during SH_RESET.\n\nI also tided up a couple of comments and change the code to use\npg_rotate_right32(.., 31) instead of adding a new function for\npg_rotate_left32 and calling that to shift left 1 bit.\n\nDavid", "msg_date": "Fri, 30 Apr 2021 15:38:10 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use simplehash.h instead of dynahash in SMgr" }, { "msg_contents": "Hi David,\n\nYou're probably aware of this, but just to make it explicit: Jakub\nWartak was testing performance of recovery, and one of the bottlenecks\nhe found in some of his cases was dynahash as used by SMgr. 
It seems\nquite possible that this work would benefit some of his test workloads.\nHe last posted about it here:\n\nhttps://postgr.es/m/VI1PR0701MB69608CBCE44D80857E59572EF6CA0@VI1PR0701MB6960.eurprd07.prod.outlook.com\n\nthough the fraction of dynahash-from-SMgr is not as high there as in\nsome of other reports I saw.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\n\n", "msg_date": "Fri, 30 Apr 2021 13:36:22 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Use simplehash.h instead of dynahash in SMgr" }, { "msg_contents": "Hi David, Alvaro, -hackers \n\n> Hi David,\n> \n> You're probably aware of this, but just to make it explicit: Jakub Wartak was\n> testing performance of recovery, and one of the bottlenecks he found in\n> some of his cases was dynahash as used by SMgr. It seems quite possible\n> that this work would benefit some of his test workloads.\n\nI might be a little bit out of the loop, but as Alvaro stated - Thomas did plenty of excellent job related to recovery performance in that thread. In my humble opinion and if I'm not mistaken (I'm speculating here) it might be *not* how Smgr hash works, but how often it is being exercised and that would also explain relatively lower than expected(?) gains here. There are at least two very important emails from him that I'm aware that are touching the topic of reordering/compacting/batching calls to Smgr:\nhttps://www.postgresql.org/message-id/CA%2BhUKG%2B2Vw3UAVNJSfz5_zhRcHUWEBDrpB7pyQ85Yroep0AKbw%40mail.gmail.com\nhttps://www.postgresql.org/message-id/flat/CA%2BhUKGK4StQ%3DeXGZ-5hTdYCmSfJ37yzLp9yW9U5uH6526H%2BUeg%40mail.gmail.com\n\nAnother potential option that we've discussed is that the redo generation itself is likely a brake of efficient recovery performance today (e.g. 
INSERT-SELECT on table with indexes, generates interleaved WAL records that touch often limited set of blocks that usually put Smgr into spotlight).\n\n-Jakub Wartak.\n\n\n", "msg_date": "Wed, 5 May 2021 08:16:32 +0000", "msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>", "msg_from_op": false, "msg_subject": "RE: Use simplehash.h instead of dynahash in SMgr" }, { "msg_contents": "Hi Jakub,\n\nOn Wed, 5 May 2021 at 20:16, Jakub Wartak <Jakub.Wartak@tomtom.com> wrote:\n> I might be a little bit out of the loop, but as Alvaro stated - Thomas did plenty of excellent job related to recovery performance in that thread. In my humble opinion and if I'm not mistaken (I'm speculating here) it might be *not* how Smgr hash works, but how often it is being exercised and that would also explain relatively lower than expected(?) gains here. There are at least two very important emails from him that I'm aware that are touching the topic of reordering/compacting/batching calls to Smgr:\n> https://www.postgresql.org/message-id/CA%2BhUKG%2B2Vw3UAVNJSfz5_zhRcHUWEBDrpB7pyQ85Yroep0AKbw%40mail.gmail.com\n> https://www.postgresql.org/message-id/flat/CA%2BhUKGK4StQ%3DeXGZ-5hTdYCmSfJ37yzLp9yW9U5uH6526H%2BUeg%40mail.gmail.com\n\nI'm not much of an expert here and I didn't follow the recovery\nprefetching stuff closely. So, with that in mind, I think there are\nlots that could be done along the lines of what Thomas is mentioning.\nBatching WAL records up by filenode then replaying each filenode one\nby one when our batching buffer is full. There could be some sort of\nparallel options there too, where workers replay a filenode each.\nHowever, that wouldn't really work for recovery on a hot-standby\nthough. We'd need to ensure we replay the commit record for each\ntransaction last. I think you'd have to batch by filenode and\ntransaction in that case. 
Each batch might be pretty small on a\ntypical OLTP workload, so it might not help much there, or it might\nhinder.\n\nBut having said that, I don't think any of those possibilities should\nstop us speeding up smgropen().\n\n> Another potential option that we've discussed is that the redo generation itself is likely a brake of efficient recovery performance today (e.g. INSERT-SELECT on table with indexes, generates interleaved WAL records that touch often limited set of blocks that usually put Smgr into spotlight).\n\nI'm not quite sure if I understand what you mean here. Is this\nqueuing up WAL records up during transactions and flush them out to\nWAL every so often after rearranging them into an order that's more\noptimal for replay?\n\nDavid\n\n\n", "msg_date": "Thu, 6 May 2021 00:32:00 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use simplehash.h instead of dynahash in SMgr" }, { "msg_contents": "Hey David,\r\n\r\n> I think you'd have to batch by filenode and transaction in that case. Each batch might be pretty small on a typical OLTP workload, so it might not help much there, or it might hinder.\r\n\r\nTrue, it is very workload dependent (I was chasing mainly INSERTs multiValues, INSERT-SELECT) that often hit the same $block, certainly not OLTP. I would even say that INSERT-as-SELECT would be more suited for DWH-like processing.\r\n\r\n> But having said that, I don't think any of those possibilities should stop us speeding up smgropen().\r\n\r\nOf course! I've tried a couple of much more smaller ideas, but without any big gains. I was able to squeeze like 300-400k function calls per second (WAL records/s), that was the point I think where I think smgropen() got abused. \r\n\r\n> > Another potential option that we've discussed is that the redo generation\r\n> itself is likely a brake of efficient recovery performance today (e.g. 
INSERT-\r\n> SELECT on table with indexes, generates interleaved WAL records that touch\r\n> often limited set of blocks that usually put Smgr into spotlight).\r\n> \r\n> I'm not quite sure if I understand what you mean here. Is this queuing up\r\n> WAL records up during transactions and flush them out to WAL every so\r\n> often after rearranging them into an order that's more optimal for replay?\r\n\r\nWhy not both? 😉 We were very concentrated on standby side, but on primary side one could also change how WAL records are generated:\r\n\r\n1) Minimalization of records towards same repeated $block eg. Heap2 table_multi_insert() API already does this and it matters to generate more optimal stream for replay:\r\n\r\npostgres@test=# create table t (id bigint primary key);\r\npostgres@test=# insert into t select generate_series(1, 10);\r\n\r\nresults in many calls due to interleave heap with btree records for the same block from Smgr perspective (this is especially visible on highly indexed tables) =>\t\r\nrmgr: Btree len (rec/tot): 64/ 64, tx: 17243284, lsn: 4/E7000108, prev 4/E70000A0, desc: INSERT_LEAF off 1, blkref #0: rel 1663/16384/32794 blk 1\r\nrmgr: Heap len (rec/tot): 63/ 63, tx: 17243284, lsn: 4/E7000148, prev 4/E7000108, desc: INSERT off 2 flags 0x00, blkref #0: rel 1663/16384/32791 blk 0\r\nrmgr: Btree len (rec/tot): 64/ 64, tx: 17243284, lsn: 4/E7000188, prev 4/E7000148, desc: INSERT_LEAF off 2, blkref #0: rel 1663/16384/32794 blk 1\r\nrmgr: Heap len (rec/tot): 63/ 63, tx: 17243284, lsn: 4/E70001C8, prev 4/E7000188, desc: INSERT off 3 flags 0x00, blkref #0: rel 1663/16384/32791 blk 0\r\nrmgr: Btree len (rec/tot): 64/ 64, tx: 17243284, lsn: 4/E7000208, prev 4/E70001C8, desc: INSERT_LEAF off 3, blkref #0: rel 1663/16384/32794 blk 1\r\nrmgr: Heap len (rec/tot): 63/ 63, tx: 17243284, lsn: 4/E7000248, prev 4/E7000208, desc: INSERT off 4 flags 0x00, blkref #0: rel 1663/16384/32791 blk 0\r\nrmgr: Btree len (rec/tot): 64/ 64, tx: 17243284, lsn: 4/E7000288, prev 
4/E7000248, desc: INSERT_LEAF off 4, blkref #0: rel 1663/16384/32794 blk 1\r\nrmgr: Heap len (rec/tot): 63/ 63, tx: 17243284, lsn: 4/E70002C8, prev 4/E7000288, desc: INSERT off 5 flags 0x00, blkref #0: rel 1663/16384/32791 blk 0\r\n[..]\r\nSimilar stuff happens for UPDATE. It basically prevents recent-buffer optimization that avoid repeated calls to smgropen().\r\n\r\nAnd here's already existing table_multi_inserts v2 API (Heap2) sample with obvious elimination of unnecessary individual calls to smgopen() via one big MULTI_INSERT instead (for CTAS/COPY/REFRESH MV) :\r\npostgres@test=# create table t (id bigint primary key);\r\npostgres@test=# copy (select generate_series (1, 10)) to '/tmp/t';\r\npostgres@test=# copy t from '/tmp/t';\r\n=>\r\nrmgr: Heap2 len (rec/tot): 210/ 210, tx: 17243290, lsn: 4/E9000028, prev 4/E8004410, desc: MULTI_INSERT+INIT 10 tuples flags 0x02, blkref #0: rel 1663/16384/32801 blk 0\r\nrmgr: Btree len (rec/tot): 102/ 102, tx: 17243290, lsn: 4/E9000100, prev 4/E9000028, desc: NEWROOT lev 0, blkref #0: rel 1663/16384/32804 blk 1, blkref #2: rel 1663/16384/32804 blk 0\r\nrmgr: Btree len (rec/tot): 64/ 64, tx: 17243290, lsn: 4/E9000168, prev 4/E9000100, desc: INSERT_LEAF off 1, blkref #0: rel 1663/16384/32804 blk 1\r\nrmgr: Btree len (rec/tot): 64/ 64, tx: 17243290, lsn: 4/E90001A8, prev 4/E9000168, desc: INSERT_LEAF off 2, blkref #0: rel 1663/16384/32804 blk 1\r\nrmgr: Btree len (rec/tot): 64/ 64, tx: 17243290, lsn: 4/E90001E8, prev 4/E90001A8, desc: INSERT_LEAF off 3, blkref #0: rel 1663/16384/32804 blk 1\r\n[..]\r\nHere Btree it is very localized (at least when concurrent sessions are not generating WAL) and it enables Thomas's recent-buffer to kick in\r\n\r\nDELETE is much more simple (thanks to not chewing out those Btree records) and also thanks to Thomas's recent-buffer should theoretically put much less stress on smgropen() already:\r\nrmgr: Heap len (rec/tot): 54/ 54, tx: 17243296, lsn: 4/ED000028, prev 4/EC002800, desc: DELETE off 1 
flags 0x00 KEYS_UPDATED , blkref #0: rel 1663/16384/32808 blk 0\r\nrmgr: Heap len (rec/tot): 54/ 54, tx: 17243296, lsn: 4/ED000060, prev 4/ED000028, desc: DELETE off 2 flags 0x00 KEYS_UPDATED , blkref #0: rel 1663/16384/32808 blk 0\r\nrmgr: Heap len (rec/tot): 54/ 54, tx: 17243296, lsn: 4/ED000098, prev 4/ED000060, desc: DELETE off 3 flags 0x00 KEYS_UPDATED , blkref #0: rel 1663/16384/32808 blk 0\r\nrmgr: Heap len (rec/tot): 54/ 54, tx: 17243296, lsn: 4/ED0000D0, prev 4/ED000098, desc: DELETE off 4 flags 0x00 KEYS_UPDATED , blkref #0: rel 1663/16384/32808 blk 0\r\n[..]\r\n\r\n2) So what's missing - I may be wrong on this one - something like \"index_multi_inserts\" Btree2 API to avoid repeatedly overwhelming smgropen() on recovery side for same index's $buffer. Not sure it is worth the effort, though especially recent-buffer fixes that:\r\nrmgr: Btree len (rec/tot): 64/ 64, tx: 17243290, lsn: 4/E9000168, prev 4/E9000100, desc: INSERT_LEAF off 1, blkref #0: rel 1663/16384/32804 blk 1\r\nrmgr: Btree len (rec/tot): 64/ 64, tx: 17243290, lsn: 4/E90001A8, prev 4/E9000168, desc: INSERT_LEAF off 2, blkref #0: rel 1663/16384/32804 blk 1\r\nrmgr: Btree len (rec/tot): 64/ 64, tx: 17243290, lsn: 4/E90001E8, prev 4/E90001A8, desc: INSERT_LEAF off 3, blkref #0: rel 1663/16384/32804 blk 1\r\nright?\r\n\r\n3) Concurrent DML sessions mixing WAL records: the buffering on backend's side of things (on private \"thread\" of WAL - in private memory - that would be simply \"copied\" into logwriter's main WAL buffer when committing/buffer full) - it would seem like an very interesting idea to limit interleaving concurrent sessions WAL records between each other and exploit the recent-buffer enhancement to avoid repeating the same calls to Smgr, wouldn't it? 
(I'm just mentioning it as I saw you were benchmarking it here and called out this idea).\r\n\r\nI could be wrong though with many of those simplifications, in any case please consult with Thomas as he knows much better and is much more trusted source than me 😉\r\n\r\n-J.\r\n\r\n", "msg_date": "Wed, 5 May 2021 14:05:43 +0000", "msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>", "msg_from_op": false, "msg_subject": "RE: Use simplehash.h instead of dynahash in SMgr" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: not tested\n\nI believe it is ready for committer.\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Fri, 07 May 2021 13:18:40 +0000", "msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Use simplehash.h instead of dynahash in SMgr" }, { "msg_contents": "I'd been thinking of this patch again. When testing with simplehash,\nI found that the width of the hash bucket type was fairly critical for\ngetting good performance from simplehash.h. With simplehash.h I\ndidn't manage to narrow this any more than 16 bytes. I needed to store\nthe 32-bit hash value and a pointer to the data. On a 64-bit machine,\nwith padding, that's 16-bytes. I've been thinking about a way to\nnarrow this down further to just 8 bytes and also solve the stable\npointer problem at the same time...\n\nI've come up with a new hash table implementation that I've called\ngenerichash. It works similarly to simplehash in regards to the\nlinear probing, only instead of storing the data in the hash bucket,\nwe just store a uint32 index that indexes off into an array. To keep\nthe pointers in that array stable, we cannot resize the array as the\ntable grows. 
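(As an aside, the stable-pointer requirement can be met with fixed-size, power-of-two segments addressed through a uint32 bucket index; a hedged sketch of that general idea follows, with assumed names and sizes rather than the generichash.h code.)

```c
#include <stdint.h>
#include <stdlib.h>

#define SEG_BITS	9					/* 512 entries per segment */
#define SEG_SIZE	(1u << SEG_BITS)
#define EMPTY_INDEX	0xFFFFFFFFu			/* sentinel stored in unused buckets */

typedef struct SegEntry
{
	uint32_t	hash;
	void	   *data;
} SegEntry;

/*
 * Growing the table allocates more fixed-size segments; segments that
 * already exist never move, so SegEntry pointers stay stable.
 */
static SegEntry *segments[1024];

static SegEntry *
entry_for_index(uint32_t index)
{
	uint32_t	segno = index >> SEG_BITS;		/* which segment */
	uint32_t	off = index & (SEG_SIZE - 1);	/* offset within it */

	if (segments[segno] == NULL)
		segments[segno] = calloc(SEG_SIZE, sizeof(SegEntry));
	return &segments[segno][off];
}
```

Because the segment size is a power of two, the shift-and-mask lookup is just a couple of instructions, which is what makes indexing by a uint32 cheap.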
Instead, I just allocate another array of the same size.\nSince these arrays are always sized as powers of 2, it's very fast to\nindex into them using the uint32 index that's stored in the bucket.\nUnused buckets just store the special index of 0xFFFFFFFF.\n\nI've also proposed to use this hash table implementation over in [1]\nto speed up LockReleaseAll(). The 0001 patch here is just the same as\nthe patch from [1].\n\nThe 0002 patch includes using a generichash hash table for SMgr.\n\nThe performance using generichash.h is about the same as the\nsimplehash.h version of the patch. Although, the test was not done on\nthe same version of master.\n\nMaster (97b713418)\ndrowley@amd3990x:~$ tail -f pg.log | grep \"redo done\"\nCPU: user: 124.85 s, system: 6.83 s, elapsed: 131.74 s\nCPU: user: 115.01 s, system: 4.76 s, elapsed: 119.83 s\nCPU: user: 122.13 s, system: 6.41 s, elapsed: 128.60 s\nCPU: user: 113.85 s, system: 6.11 s, elapsed: 120.02 s\nCPU: user: 121.40 s, system: 6.28 s, elapsed: 127.74 s\nCPU: user: 113.71 s, system: 5.80 s, elapsed: 119.57 s\nCPU: user: 113.96 s, system: 5.90 s, elapsed: 119.92 s\nCPU: user: 122.74 s, system: 6.21 s, elapsed: 129.01 s\nCPU: user: 122.00 s, system: 6.38 s, elapsed: 128.44 s\nCPU: user: 113.06 s, system: 6.14 s, elapsed: 119.25 s\nCPU: user: 114.42 s, system: 4.35 s, elapsed: 118.82 s\n\nMedian: 120.02 s\n\nmaster + v1 + v2\n\ndrowley@amd3990x:~$ tail -n 0 -f pg.log | grep \"redo done\"\nCPU: user: 107.75 s, system: 4.61 s, elapsed: 112.41 s\nCPU: user: 108.07 s, system: 4.49 s, elapsed: 112.61 s\nCPU: user: 106.89 s, system: 5.55 s, elapsed: 112.49 s\nCPU: user: 107.42 s, system: 5.64 s, elapsed: 113.12 s\nCPU: user: 106.85 s, system: 4.42 s, elapsed: 111.31 s\nCPU: user: 107.36 s, system: 4.76 s, elapsed: 112.16 s\nCPU: user: 107.20 s, system: 4.47 s, elapsed: 111.72 s\nCPU: user: 106.94 s, system: 5.89 s, elapsed: 112.88 s\nCPU: user: 115.32 s, system: 6.12 s, elapsed: 121.49 s\nCPU: user: 108.02 s, system: 4.48 s, 
elapsed: 112.54 s\nCPU: user: 106.93 s, system: 4.54 s, elapsed: 111.51 s\n\nMedian: 112.49 s\n\nSo about a 6.69% speedup\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvoKqWRxw5nnUPZ8+mAJKHPOPxYGoY1gQdh0WeS4+biVhg@mail.gmail.com", "msg_date": "Tue, 22 Jun 2021 02:15:26 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use simplehash.h instead of dynahash in SMgr" }, { "msg_contents": "On Mon, Jun 21, 2021 at 10:15 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> I've come up with a new hash table implementation that I've called\n> generichash.\n\nAt the risk of kibitzing the least-important detail of this proposal,\nI'm not very happy with the names of our hash implementations.\nsimplehash is not especially simple, and dynahash is not particularly\ndynamic, especially now that the main place we use it is for\nshared-memory hash tables that can't be resized. Likewise, generichash\ndoesn't really give any kind of clue about how this hash table is\ndifferent from any of the others. I don't know how possible it is to\ndo better here; naming things is one of the two hard problems in\ncomputer science. 
In a perfect world, though, our hash table\nimplementations would be named in such a way that somebody might be\nable to look at the names and guess on that basis which one is\nbest-suited to a given task.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 21 Jun 2021 10:53:33 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use simplehash.h instead of dynahash in SMgr" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Jun 21, 2021 at 10:15 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>> I've come up with a new hash table implementation that I've called\n>> generichash.\n\n> At the risk of kibitzing the least-important detail of this proposal,\n> I'm not very happy with the names of our hash implementations.\n\nI kind of wonder if we really need four different hash table\nimplementations (this being the third \"generic\" one, plus hash join\nhas its own, and I may have forgotten others). Should we instead\nthink about revising simplehash to gain the benefits of this patch?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Jun 2021 11:43:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Use simplehash.h instead of dynahash in SMgr" }, { "msg_contents": "On Tue, 22 Jun 2021 at 02:53, Robert Haas <robertmhaas@gmail.com> wrote:\n> At the risk of kibitzing the least-important detail of this proposal,\n> I'm not very happy with the names of our hash implementations.\n> simplehash is not especially simple, and dynahash is not particularly\n> dynamic, especially now that the main place we use it is for\n> shared-memory hash tables that can't be resized. Likewise, generichash\n> doesn't really give any kind of clue about how this hash table is\n> different from any of the others. I don't know how possible it is to\n> do better here; naming things is one of the two hard problems in\n> computer science. 
In a perfect world, though, our hash table\n> implementations would be named in such a way that somebody might be\n> able to look at the names and guess on that basis which one is\n> best-suited to a given task.\n\nI'm certainly open to better names. I did almost call it stablehash,\nin regards to the pointers to elements not moving around like they do\nwith simplehash.\n\nI think more generally, hash table implementations are complex enough\nthat it's pretty much impossible to give them a short enough\nmeaningful name. Most papers just end up assigning a name to some\ntechnique. e.g Robinhood, Cuckoo etc.\n\nBoth simplehash and generichash use a variant of Robinhood hashing.\nsimplehash uses open addressing and generichash does not. Instead of\nAndres naming it simplehash, if he'd instead called it\n\"robinhoodhash\", then someone might come along and complain that his\nimplementation is broken because it does not implement tombstoning.\nMaybe Andres thought he'd avoid that by not claiming that it's an\nimplementation of a Robinhood hash table. That seems pretty wise to\nme. Naming it simplehash was a pretty simple way of avoiding that\nproblem.\n\nAnyway, I'm open to better names, but I don't think the name should\ndrive the implementation. If the implementation does not fit the name\nperfectly, then the name should change rather than the implementation.\n\nPersonally, I think we should call it RowleyHash, but I think others\nmight object. ;-)\n\nDavid\n\n\n", "msg_date": "Tue, 22 Jun 2021 13:38:33 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use simplehash.h instead of dynahash in SMgr" }, { "msg_contents": "On Tue, 22 Jun 2021 at 03:43, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I kind of wonder if we really need four different hash table\n> implementations (this being the third \"generic\" one, plus hash join\n> has its own, and I may have forgotten others). 
Should we instead\n> think about revising simplehash to gain the benefits of this patch?\n\nhmm, yeah. I definitely agree with trying to have as much reusable\ncode as we can when we can. It certainly reduces maintenance and bugs\ntend to be found more quickly too. It's a very worthy cause.\n\nI did happen to think of this when I was copying swathes of code out\nof simplehash.h. However, I decided that the two implementations are\nsufficiently different that if I tried to merge them both into one .h\nfile, we'd have some unreadable and unmaintainable mess. I just don't\nthink their DNA is compatible enough for the two to be mated\nsuccessfully. For example, simplehash uses open addressing and\ngenerichash does not. This means that things like iterating over the\ntable works completely differently. Lookups in generichash need to\nperform an extra step to fetch the actual data from the segment\narrays. I think it would certainly be possible to merge the two, but\nI just don't think it would be easy code to work on if we did that.\n\nThe good thing is that that the API between the two is very similar\nand it's quite easy to swap one for the other. I did make changes\naround memory allocation due to me being too cheap to zero memory when\nI didn't need to and simplehash not having any means of allocating\nmemory without zeroing it.\n\nI also think that there's just no one-size-fits-all hash table type.\nsimplehash will not perform well when the size of the stored element\nis very large. There's simply too much memcpying to move data around\nduring insert/delete. simplehash will also have terrible iteration\nperformance in sparsely populated tables. However, simplehash will be\npretty much unbeatable for lookups where the element type is very\nsmall, e.g single Datum, or an int. The CPU cache efficiency there\nwill be pretty much unbeatable.\n\nI tried to document the advantages of each in the file header\ncomments. 
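(Since both implementations are described above as variants of Robinhood hashing, here is a toy, self-contained illustration of the probing rule, keys only, fixed-size table, no growth; it is purely illustrative and unrelated to either implementation's real code. The idea: a probing key steals a slot whenever the resident entry is closer to its home bucket than the probe has travelled.)

```c
#include <stdint.h>

#define TABSIZE 16						/* power of two */
#define MASK	(TABSIZE - 1)
#define EMPTY	0						/* key 0 is reserved as "no key" */

static uint32_t slots[TABSIZE];

static uint32_t
hashpos(uint32_t key)
{
	return key & MASK;					/* toy hash: low bits */
}

/* Robin Hood insert: steal from residents "richer" (closer to home). */
static void
rh_insert(uint32_t key)
{
	uint32_t	pos = hashpos(key);
	uint32_t	dist = 0;

	for (;;)
	{
		if (slots[pos] == EMPTY)
		{
			slots[pos] = key;
			return;
		}

		uint32_t	resident_dist = (pos - hashpos(slots[pos])) & MASK;

		if (resident_dist < dist)
		{
			uint32_t	tmp = slots[pos];	/* evict the richer resident */

			slots[pos] = key;
			key = tmp;
			dist = resident_dist;
		}
		pos = (pos + 1) & MASK;
		dist++;
	}
}

/* Lookup can stop early once it passes where the key would have to be. */
static int
rh_lookup(uint32_t key)
{
	uint32_t	pos = hashpos(key);
	uint32_t	dist = 0;

	for (;;)
	{
		if (slots[pos] == EMPTY)
			return 0;
		if (slots[pos] == key)
			return 1;
		if (((pos - hashpos(slots[pos])) & MASK) < dist)
			return 0;					/* past its possible position */
		pos = (pos + 1) & MASK;
		dist++;
	}
}
```

The early-exit rule in lookup is what keeps probe sequences short, and it is also why a naive delete needs tombstones or backward shifting, the point raised above about implementations that skip tombstoning.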
I should probably also add something to simplehash.h's\ncomments to mention generichash.h\n\nDavid\n\n\n", "msg_date": "Tue, 22 Jun 2021 13:55:10 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use simplehash.h instead of dynahash in SMgr" }, { "msg_contents": "On Tue, Jun 22, 2021 at 1:55 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Tue, 22 Jun 2021 at 03:43, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I kind of wonder if we really need four different hash table\n> > implementations (this being the third \"generic\" one, plus hash join\n> > has its own, and I may have forgotten others). Should we instead\n> > think about revising simplehash to gain the benefits of this patch?\n>\n> hmm, yeah. I definitely agree with trying to have as much reusable\n> code as we can when we can. It certainly reduces maintenance and bugs\n> tend to be found more quickly too. It's a very worthy cause.\n\nIt is indeed really hard to decide when you have a new thing, and when\nyou need a new way to parameterise the existing generic thing. I\nwondered about this how-many-hash-tables-does-it-take question a lot\nwhen writing dshash.c (a chaining hash table that can live in weird\ndsa.c memory, backed by DSM segments created on the fly that may be\nmapped at different addresses in each backend, and has dynahash-style\npartition locking), and this was around the time Andres was talking\nabout simplehash. In retrospect, I'd probably kick out the built-in\nlocking and partitions and instead let callers create their own\npartitioning scheme on top from N tables, and that'd remove one quirk,\nleaving only the freaky pointers and allocator. 
I recall from a\nprevious life that Boost's unordered_map template is smart enough to\nsupport running in shared memory mapped at different addresses just\nthrough parameterisation that controls the way it deals with internal\npointers (boost::unordered_map<..., ShmemAllocator>), which seemed\npretty clever to me, and it might be achievable to do the same with a\ngeneric hash table for us that could take over dshash's specialness.\n\nOne idea I had at the time is that the right number of hash table\nimplementations in our tree is two: one for chaining (like dynahash)\nand one for open addressing/probing (like simplehash), and that\neverything else should be hoisted out (locking, partitioning) or made\ninto template parameters through the generic programming technique\nthat simplehash.h has demonstrated (allocators, magic pointer type for\ninternal pointers, plus of course the inlinable ops). But that was\nbefore we'd really fully adopted the idea of this style of template\ncode. (I also assumed the weird memory stuff would be temporary and\nwe'd move to threads, but that's another topic for another thread.)\nIt seems like you'd disagree with this, and you'd say the right number\nis three. 
But it's also possible to argue for one...\n\nA more superficial comment: I don't like calling hash tables \"hash\".\nI blame perl.\n\n\n", "msg_date": "Tue, 22 Jun 2021 14:48:51 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use simplehash.h instead of dynahash in SMgr" }, { "msg_contents": "On Tue, 22 Jun 2021 at 14:49, Thomas Munro <thomas.munro@gmail.com> wrote:\n> One idea I had at the time is that the right number of hash table\n> implementations in our tree is two: one for chaining (like dynahash)\n> and one for open addressing/probing (like simplehash), and that\n> everything else should be hoisted out (locking, partitioning) or made\n> into template parameters through the generic programming technique\n> that simplehash.h has demonstrated (allocators, magic pointer type for\n> internal pointers, plus of course the inlinable ops). But that was\n> before we'd really fully adopted the idea of this style of template\n> code. (I also assumed the weird memory stuff would be temporary and\n> we'd move to threads, but that's another topic for another thread.)\n> It seems like you'd disagree with this, and you'd say the right number\n> is three. But it's also possible to argue for one...\n\nI guess we could also ask ourselves how many join algorithms we need.\nWe have 3.something. None of which is perfect for every job. That's\nwhy we have multiple. I wonder why this is different.\n\nJust for anyone who missed it, the reason I wrote generichash and\ndidn't just use simplehash is that it's not possible to point any\nother pointers to a simplehash table because these get shuffled around\nduring insert/delete. For the locallock stuff over on [1] we need the\nLOCALLOCK object to be stable as we point to these from the resource\nmanager. Likewise here for SMgr, we point to SMgrRelationData objects\nfrom RelationData. 
We can't have the hash table implementation swap\nthese out from under us.\n\nAdditionally, I coded generichash to fix the very slow hash seq scan\nproblem that we have in LockReleaseAll() when a transaction has run in\nthe backend that took lots of locks and caused the locallock hash\ntable to bloat. Later when we run transactions that just grab a few\nlocks it takes us a relatively long time to do LockReleaseAll()\nbecause we have to skip all those empty hash table buckets in the\nbloated table. (See iterate_sparse_table.png and\niterate_very_sparse_table.png)\n\nI just finished writing a benchmark suite for comparing simplehash to\ngenerichash. I did this as a standalone C program. See the attached\nhashbench.tar.gz. You can run the tests with just ./test.sh. Just be\ncareful if compiling manually as test.sh passes -DHAVE__BUILTIN_CTZ\n-DHAVE_LONG_INT_64, which have quite a big effect on the performance of\ngenerichash due to it using pg_rightmost_one_pos64() when searching\nthe bitmaps for used items.\n\nI've attached graphs showing the results I got from running test.sh on\nmy AMD 3990x machine. Because the size of the struct being hashed\nmatters a lot to the performance of simplehash, I ran tests with 8,\n16, 32, 64, 128, 256-byte structs. This matters because simplehash\ndoes memcpy() on this when moving stuff around during insert/delete.\nThe size of the \"payload\" matters a bit less to generichash.\n\nYou can see that the lookup performance of generichash is very similar\nto simplehash. The insert/delete test shows the generichash is very\nslightly slower from 8-128 bytes but wins when simplehash has to\ntackle 256 bytes of data.\n\nThe seq scan tests show that simplehash is better when the table is\nfull of items, but it's terrible when the bucket array is only sparsely\npopulated. I needed generichash to be fast at this for\nLockReleaseAll(). 
I might be able to speed up generichash iteration\nwhen the table is full a bit more by checking if the segment is full\nand skipping to the next item rather than consulting the bitmap. That\nwill slow down the sparse case a bit though. Not sure if it's worth\nit.\n\nAnyway, what I hope to show here is that there is no one-size-fits-all\nhash table.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvoKqWRxw5nnUPZ8+mAJKHPOPxYGoY1gQdh0WeS4+biVhg@mail.gmail.com", "msg_date": "Tue, 22 Jun 2021 18:51:34 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use simplehash.h instead of dynahash in SMgr" }, { "msg_contents": "On Tue, Jun 22, 2021 at 6:51 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> I guess we could also ask ourselves how many join algorithms we need.\n\nDavid and I discussed this a bit off-list, and I just wanted to share\nhow I understand the idea so far in case it helps someone else. There\nare essentially three subcomponents working together:\n\n1. A data structure similar in some ways to a C++ std::deque<T>,\nwhich gives O(1) access to elements by index, is densely packed to\nenable cache-friendly scanning of all elements, has stable addresses\n(as long as you only add new elements at the end or overwrite existing\nslots), and is internally backed by an array of pointers to a set of\nchunks.\n\n2. A bitmapset that tracks unused elements in 1, making it easy to\nfind the lowest-index hole when looking for a place to put a new one\nby linear search for a 1 bit, so that we tend towards maximum density\ndespite having random frees from time to time (seems good, the same\nidea is used in kernels to allocate the lowest unused file descriptor\nnumber).\n\n3. A hash table that has as elements indexes into 1. 
It somehow hides\nthe difference between keys (what callers look things up with) and\nkeys reachable by following an index into 1 (where elements' keys\nlive).\n\nOne thought is that you could do 1 as a separate component as the\n\"primary\" data structure, and use a plain old simplehash for 3 as a\nkind of index into it, but use pointers (rather than indexes) to\nobjects in 1 as elements. I don't know if it's better.\n\n\n", "msg_date": "Wed, 23 Jun 2021 12:17:17 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use simplehash.h instead of dynahash in SMgr" }, { "msg_contents": "On Wed, 23 Jun 2021 at 12:17, Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> David and I discussed this a bit off-list, and I just wanted to share\n> how I understand the idea so far in case it helps someone else. There\n> are essentially three subcomponents working together:\n\nThanks for taking an interest in this. I started looking at your idea\nand I've now changed my mind from just not liking it to thinking that\nthe whole idea is just completely horrible :-(\n\nIt gets really messy with all the nested pre-processor stuff around\nfetching the element from the segmented array inside simplehash. One\nproblem is that simplehash needs the address of the segments despite\nsimplehash not knowing anything about segments. I've tried to make\nthat work by passing in the generic hash struct as simplehash's\nprivate_data. This ends up with deeply nested macros all defined in\ndifferent files. I pitty the future person debugging that.\n\nThere is also a problem of how to reference simplehash functions\ninside the generichash code. It's not possible to do things like\nSH_CREATE() because all those macros are undefined at the end of\nsimplehash.h. It's no good to hardcode the names either as GH_PREFIX\nmust be used, else it wouldn't be possible to use more than 1\ndifferenrly defined hash table per .c file. 
Fixing this means either\nmodifying simplehash.h to not undefine all the name macros at the end\nmaybe with SH_NOUNDEF or creating another set of macros to build the\nnames for the simplehash functions inside the generic hash code. I\ndon't like either of those ideas.\n\nThere are also a bunch of changes / API breakages that need to be done\nto make this work with simplehash.h.\n\n1) Since I really need 8-byte buckets in the hash table to make this\nas fast as possible, I want to use the array index for the hash status\nand that means changing the simplehash API to allow that to work.\nThis requires something like SH_IS_BUCKET_INUSE, SH_SET_BUCKET_INUSE,\nSH_SET_BUCKET_EMPTY.\n2) I need to add a new memory allocation function to not zero the\nmemory. At the moment all hash buckets are emptied when creating a\ntable by zeroing the bucket memory. If someone defines\nSH_SET_BUCKET_EMPTY to do something that says 0 memory is not empty,\nthen that won't work. So I need to allocate the bucket memory then\ncall SH_SET_BUCKET_EMPTY on each bucket.\n3) I'll need to replace SH_KEY with something more complex. Since the\nsimplehash bucket will just have a uint32 hashvalue and uint32 index,\nthe hash key is not stored in the bucket, it's stored over in the\nsegment. I'll need to replace SH_KEY with SH_GETKEY and SH_SETKEY.\nThese will need to consult the simplehash's private_data so that the\nelement can be found in the segmented array.\n\nAlso, simplehash internally manages when the hash table needs to grow.\nI'll need to perform separate checks to see if the segmented array\nalso must grow. It's a bit annoying to double up those checks as\nthey're in a very hot path as they're done everytime someone inserts\ninto the table.\n\n> 2. 
A bitmapset that tracks unused elements in 1, making it easy to\n> find the lowest-index hole when looking for a place to put a new one\n> by linear search for a 1 bit, so that we tend towards maximum density\n> despite having random frees from time to time (seems good, the same\n> idea is used in kernels to allocate the lowest unused file descriptor\n> number).\n\nI didn't use Bitmapsets. I wanted the bitmaps to be allocated in the\nsame chunk of memory as the segments of the array. Also, because\nbitmapset's nwords is variable, then they can't really do any loop\nunrolling. Since in my implementation the number of bitmap words are\nknown at compile-time, the compiler has the flexibility to do loop\nunrolling. The bitmap manipulation is one of the biggest overheads in\ngenerichash.h. I'd prefer to keep that as fast as possible.\n\n> 3. A hash table that has as elements indexes into 1. It somehow hides\n> the difference between keys (what callers look things up with) and\n> keys reachable by following an index into 1 (where elements' keys\n> live).\n\nI think that can be done, but it would require looking up the\nsegmented array twice instead of once. The first time would be when\nwe compare the keys after seeing the hash values match. The final time\nwould be in the calling code to translate the index to the pointer.\nHopefully the compiler would be able to optimize that to a single\nlookup.\n\n> One thought is that you could do 1 as a separate component as the\n> \"primary\" data structure, and use a plain old simplehash for 3 as a\n> kind of index into it, but use pointers (rather than indexes) to\n> objects in 1 as elements. I don't know if it's better.\n\nUsing pointers would double the bucket width on a 64 bit machine. I\ndon't want to do that. Also, to be able to determine the segment from\nthe pointer it would require looping over each segment to check if the\npointer belongs there. 
With the index we can determine the segment\ndirectly with bit-shifting the index.\n\nSo, with all that, I really don't think it's a great idea to try and\nhave this use simplehash.h code. I plan to pursue the idea I proposed\nwith having separate hash table code that is coded properly to have\nstable pointers into the data rather than trying to contort\nsimplehash's code into working that way.\n\nDavid\n\n\n", "msg_date": "Wed, 30 Jun 2021 23:14:15 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use simplehash.h instead of dynahash in SMgr" }, { "msg_contents": "On Wed, Jun 30, 2021 at 11:14 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Wed, 23 Jun 2021 at 12:17, Thomas Munro <thomas.munro@gmail.com> wrote:\n> Thanks for taking an interest in this. I started looking at your idea\n> and I've now changed my mind from just not liking it to thinking that\n> the whole idea is just completely horrible :-(\n\nHah.\n\nI accept that trying to make a thing that \"wraps\" these data\nstructures and provides a simple interface is probably really quite\nhorrible with preprocessor voodoo.\n\nI was mainly questioning how bad it would be if we had a generic\nsegmented array component (seems like a great idea, which I'm sure\nwould find other uses, I recall wanting to write that myself before),\nand then combined that with the presence map idea to make a dense\nobject pool (ditto), but then, in each place where we need something\nlike this, just used a plain old hash table to point directly to\nobjects in it whenever we needed that, open coding the logic to keep\nit in sync (I mean, just the way that people usually use hash tables).\nThat way, the object pool can give you very fast scans over all\nobjects in cache friendly order (no linked lists), and the hash table\ndoesn't know/care about its existence. 
In other words, small reusable\ncomponents that each do one thing well and are not coupled together.\n\nI think I understand now that you really, really want small index\nnumbers and not 64 bit pointers in the hash table. Hmm.\n\n> It gets really messy with all the nested pre-processor stuff around\n> fetching the element from the segmented array inside simplehash. One\n> problem is that simplehash needs the address of the segments despite\n> simplehash not knowing anything about segments. I've tried to make\n> that work by passing in the generic hash struct as simplehash's\n> private_data. This ends up with deeply nested macros all defined in\n> different files. I pity the future person debugging that.\n\nYeah, that sounds terrible.\n\n> There are also a bunch of changes / API breakages that need to be done\n> to make this work with simplehash.h.\n>\n> 1) Since I really need 8-byte buckets in the hash table to make this\n> as fast as possible, I want to use the array index for the hash status\n> and that means changing the simplehash API to allow that to work.\n> This requires something like SH_IS_BUCKET_INUSE, SH_SET_BUCKET_INUSE,\n> SH_SET_BUCKET_EMPTY.\n\n+1 for doing customisable \"is in use\" checks one day anyway, as a\nseparate project. Not sure if any current users could shrink their\nstructs in practice because, at a glance, the same amount of space\nmight be used by padding anyway, but when a case like that shows up...\n\n> > 2. A bitmapset that tracks unused elements in 1, making it easy to\n> > find the lowest-index hole when looking for a place to put a new one\n> > by linear search for a 1 bit, so that we tend towards maximum density\n> > despite having random frees from time to time (seems good, the same\n> > idea is used in kernels to allocate the lowest unused file descriptor\n> > number).\n>\n> I didn't use Bitmapsets. I wanted the bitmaps to be allocated in the\n> same chunk of memory as the segments of the array. 
Also, because\n> bitmapset's nwords is variable, then they can't really do any loop\n> unrolling. Since in my implementation the number of bitmap words are\n> known at compile-time, the compiler has the flexibility to do loop\n> unrolling. The bitmap manipulation is one of the biggest overheads in\n> generichash.h. I'd prefer to keep that as fast as possible.\n\nI think my hands autocompleted \"bitmapset\", I really meant to write\njust \"bitmap\" as I didn't think you were using the actual thing called\nbitmapset, but point taken.\n\n> So, with all that, I really don't think it's a great idea to try and\n> have this use simplehash.h code. I plan to pursue the idea I proposed\n> with having separate hash table code that is coded properly to have\n> stable pointers into the data rather than trying to contort\n> simplehash's code into working that way.\n\nFair enough.\n\nIt's not that I don't believe it's a good idea to be able to perform\ncache-friendly iteration over densely packed objects... that part\nsounds great... it's just that it's not obvious to me that it should\nbe a *hashtable's* job to provide that access path. Perhaps I lack\nimagination and we'll have to agree to differ.\n\n\n", "msg_date": "Thu, 1 Jul 2021 12:59:27 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use simplehash.h instead of dynahash in SMgr" }, { "msg_contents": "On Thu, 1 Jul 2021 at 13:00, Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Wed, Jun 30, 2021 at 11:14 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > 1) Since I really need 8-byte buckets in the hash table to make this\n> > as fast as possible, I want to use the array index for the hash status\n> > and that means changing the simplehash API to allow that to work.\n> > This requires something like SH_IS_BUCKET_INUSE, SH_SET_BUCKET_INUSE,\n> > SH_SET_BUCKET_EMPTY.\n>\n> +1 for doing customisable \"is in use\" checks one day anyway, as a\n> separate project. 
Not sure if any current users could shrink their\n> structs in practice because, at a glance, the same amount of space\n> might be used by padding anyway, but when a case like that shows up...\n\nYeah, I did look at that when messing with simplehash when working on\nResult Cache a few months ago. I found all current usages have at\nleast a free byte, so I wasn't motivated to allow custom statuses to\nbe defined.\n\nThere's probably a small tidy up to do in simplehash maybe along with\nthat patch. If you look at SH_GROW, for example, you'll see various\nformations of:\n\nif (oldentry->status != SH_STATUS_IN_USE)\nif (oldentry->status == SH_STATUS_IN_USE)\nif (newentry->status == SH_STATUS_EMPTY)\n\nI'm not all that sure why there's a need to distinguish !=\nSH_STATUS_IN_USE from == SH_STATUS_EMPTY. I can only imagine that\nAndres was messing around with tombstoning and at one point had a 3rd\nstatus in a development version. There are some minor inefficiencies\nas a result of this, e.g in SH_DELETE, the code does:\n\nif (entry->status == SH_STATUS_EMPTY)\n return false;\n\nif (entry->status == SH_STATUS_IN_USE &&\n SH_COMPARE_KEYS(tb, hash, key, entry))\n\nThat SH_STATUS_IN_USE check is always true.\n\nDavid\n\n\n", "msg_date": "Thu, 1 Jul 2021 13:16:52 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use simplehash.h instead of dynahash in SMgr" }, { "msg_contents": "On Tue, Jun 22, 2021 at 02:15:26AM +1200, David Rowley wrote:\n[...]\n> \n> I've come up with a new hash table implementation that I've called\n> generichash. 
It works similarly to simplehash in regards to the\n\nHi David,\n\nAre you planning to work on this in this CF?\nThis is marked as \"Ready for committer\" but it doesn't apply anymore.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n", "msg_date": "Fri, 24 Sep 2021 03:26:26 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": false, "msg_subject": "Re: Use simplehash.h instead of dynahash in SMgr" }, { "msg_contents": "On Fri, 24 Sept 2021 at 20:26, Jaime Casanova\n<jcasanov@systemguards.com.ec> wrote:\n> Are you planning to work on this in this CF?\n> This is marked as \"Ready for committer\" but it doesn't apply anymore.\n\nI've attached an updated patch. Since this patch is pretty different\nfrom the one that was marked as ready for committer, I'll move this to\nneeds review.\n\nHowever, I'm a bit disinclined to go ahead with this patch at all.\nThomas made it quite clear it's not for the patch, and on discussing\nthe patch with Andres, it turned out he does not like the idea either.\nAndres' argument was along the lines of bitmaps being slow. The hash\ntable uses bitmaps to record which items in each segment are in use. I\ndon't really agree with him about that, so we'd likely need some more\ncomments to help reach a consensus about if we want this or not.\n\nMaybe Andres has more comments, so I've included him here.\n\nDavid", "msg_date": "Mon, 27 Sep 2021 16:30:25 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use simplehash.h instead of dynahash in SMgr" }, { "msg_contents": "On Mon, Sep 27, 2021 at 04:30:25PM +1300, David Rowley wrote:\n> On Fri, 24 Sept 2021 at 20:26, Jaime Casanova\n> <jcasanov@systemguards.com.ec> wrote:\n> > Are you planning to work on this in this CF?\n> > This is marked as \"Ready for committer\" but it doesn't apply anymore.\n> \n> I've attached an updated patch. 
Since this patch is pretty different\n> from the one that was marked as ready for committer, I'll move this to\n> needs review.\n> \n> However, I'm a bit disinclined to go ahead with this patch at all.\n> Thomas made it quite clear it's not for the patch, and on discussing\n> the patch with Andres, it turned out he does not like the idea either.\n> Andres' argument was along the lines of bitmaps being slow. The hash\n> table uses bitmaps to record which items in each segment are in use. I\n> don't really agree with him about that, so we'd likely need some more\n> comments to help reach a consensus about if we want this or not.\n> \n> Maybe Andres has more comments, so I've included him here.\n> \n\nHi David,\n\nThanks for the updated patch.\n\nBased on your comments I will mark this patch as withdrawn at midday of \nmy monday unless someone objects to that.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n", "msg_date": "Mon, 4 Oct 2021 02:37:13 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": false, "msg_subject": "Re: Use simplehash.h instead of dynahash in SMgr" }, { "msg_contents": "On Mon, 4 Oct 2021 at 20:37, Jaime Casanova\n<jcasanov@systemguards.com.ec> wrote:\n> Based on your comments I will mark this patch as withdrawn at midday of\n> my monday unless someone objects to that.\n\nI really think we need a hash table implementation that's faster than\ndynahash and supports stable pointers to elements (simplehash does not\nhave stable pointers). I think withdrawing this won't help us move\ntowards getting that.\n\nThomas voiced his concerns here about having an extra hash table\nimplementation and then also concerns that I've coded the hash table\ncode to be fast to iterate over the hashed items. 
To be honest, I\nthink both Andres and Thomas must be misunderstanding the bitmap part.\nI get the impression that they both think the bitmap is solely there\nto make iterations faster, but in reality it's primarily there as a\ncompact freelist and can also be used to make iterations over sparsely\npopulated tables fast. For the freelist we look for 0-bits, and we\nlook for 1-bits during iteration.\n\nI think I'd much rather talk about the concerns here than just\nwithdraw this. Even if what I have today just serves as something to\naid discussion.\n\nIt would also be good to get the points Andres raised with me off-list\non this thread. I think his primary concern was that bitmaps are\nslow, but I don't really think maintaining full pointers into freed\nitems is going to improve the performance of this.\n\nDavid\n\n\n", "msg_date": "Tue, 5 Oct 2021 11:07:48 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use simplehash.h instead of dynahash in SMgr" }, { "msg_contents": "Good day, David and all.\n\nOn Tue, 05/10/2021 at 11:07 +1300, David Rowley wrote:\n> On Mon, 4 Oct 2021 at 20:37, Jaime Casanova\n> <jcasanov@systemguards.com.ec> wrote:\n> > Based on your comments I will mark this patch as withdrawn at midday\n> > of\n> > my monday unless someone objects to that.\n> \n> I really think we need a hash table implementation that's faster than\n> dynahash and supports stable pointers to elements (simplehash does not\n> have stable pointers). I think withdrawing this won't help us move\n> towards getting that.\n\nI agree with you. I believe densehash could replace both dynahash and\nsimplehash. Shared memory usages of dynahash should be reworked to\nanother, less dynamic hash structure. 
So there should be densehash for\nlocal hashes and statichash for static shared memory.\n\ndensehash's slight slowness in some operations compared to simplehash\ndoesn't justify keeping simplehash beside densehash.\n\n> Thomas voiced his concerns here about having an extra hash table\n> implementation and then also concerns that I've coded the hash table\n> code to be fast to iterate over the hashed items. To be honest, I\n> think both Andres and Thomas must be misunderstanding the bitmap part.\n> I get the impression that they both think the bitmap is solely there\n> to make iterations faster, but in reality it's primarily there as a\n> compact freelist and can also be used to make iterations over sparsely\n> populated tables fast. For the freelist we look for 0-bits, and we\n> look for 1-bits during iteration.\n\nI think this part is overengineered. More below.\n\n> I think I'd much rather talk about the concerns here than just\n> withdraw this. Even if what I have today just serves as something to\n> aid discussion.\n> \n> It would also be good to get the points Andres raised with me off-list\n> on this thread. I think his primary concern was that bitmaps are\n> slow, but I don't really think maintaining full pointers into freed\n> items is going to improve the performance of this.\n> \n> David\n\nFirst, on the \"quirks\" I was able to see in the patch:\n\nDH_NEXT_ZEROBIT:\n\n DH_BITMAP_WORD mask = (~(DH_BITMAP_WORD) 0) << DH_BITNUM(prevbit);\n DH_BITMAP_WORD word = ~(words[wordnum] & mask); /* flip bits */\n\nreally should be\n\n DH_BITMAP_WORD mask = (~(DH_BITMAP_WORD) 0) << DH_BITNUM(prevbit);\n DH_BITMAP_WORD word = (~words[wordnum]) & mask; /* flip bits */\n\nBut it does no harm because DH_NEXT_ZEROBIT is always called with\n`prevbit = -1`, which is incremented to `0`. 
Therefore `mask` is always\n`0xffff...ff`.\n\nDH_INDEX_TO_ELEMENT\n\n /* ensure this segment is marked as used */\nshould be\n /* ensure this item is marked as used in the segment */\n\nDH_GET_NEXT_UNUSED_ENTRY\n\n /* find the first segment with an unused item */\n while (seg != NULL && seg->nitems == DH_ITEMS_PER_SEGMENT)\n seg = tb->segments[++segidx];\n\nThere is no protection for `++segidx <= tb->nsegments`. I understand it\ncannot happen because `grow_threshold` is always less than\n`nsegment * DH_ITEMS_PER_SEGMENT`, but at least a comment should be\nleft about why the absence of the check is legal.\n\nNow architecture notes:\n\nI don't believe there is a need for a configurable\nDH_ITEMS_PER_SEGMENT. I don't even believe it should be anything other\nthan 16 (or 8). Then a segment needs only one `used_items` word, which\nsimplifies the code a lot.\nThere is not much difference in overhead between 1/16 and 1/256.\n\nAnd then I believe a segment doesn't need both `nitems` and\n`used_items`. The condition \"segment is full\" will be equal to\n`used_items == 0xffff`.\n\nNext, I think it is better to make a real free list instead of looping\nto search for one. I.e. add a `uint32 DH_SEGMENT->next` field and\nmaintain a list starting from `first_free_segment`.\nIf the concern were \"allocate from lower-numbered segments first\", then\na min-heap could be created. It is possible to create a very efficient\nnon-balanced \"binary heap\" with just two fields (`uint32 left, right`).\nAn algorithmic PoC in the Ruby language is attached.\n\nThere is also an allocation concern: AllocSet tends to allocate in\npower2 sizes. 
Using power2 segments with a header (nitems/used_items) will certainly\nlead to 2x wasted space on every segment if the element size is also\npower2, and a bit less for other element sizes.\nThere could be two workarounds:\n- make the segment a bit less capable (15 elements instead of 16)\n- move the header from the segment itself to the `DH_TYPE->segments`\n array.\nI think the second option is preferable:\n- `DH_TYPE->segments[x]` is inevitably accessed on every operation,\n so why not store some info there?\n- if nitems/used_items are in `DH_TYPE->segments[x]`, then hashtable\n iteration doesn't need the bitmap at all - there will be no need for\n the `DH_TYPE->used_segments` bitmap. The absence of this bitmap will\n reduce overhead on usual operations (insert/delete) as well.\n\nHope I was useful.\n\nregards\n\nYura Sokolov\ny.sokolov@postgrespro.ru\nfunny.falcon@gmail.com", "msg_date": "Wed, 06 Oct 2021 19:15:38 +0300", "msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Use simplehash.h instead of dynahash in SMgr" }, { "msg_contents": "On Tue, Oct 05, 2021 at 11:07:48AM +1300, David Rowley wrote:\n> I think I'd much rather talk about the concerns here than just\n> withdraw this. Even if what I have today just serves as something to\n> aid discussion.\n\nHmm. This last update was two months ago, and the patch does not\napply anymore. I am marking it as RwF for now.\n--\nMichael", "msg_date": "Fri, 3 Dec 2021 15:55:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Use simplehash.h instead of dynahash in SMgr" }, { "msg_contents": "Hi,\n\nOn 2021-04-25 03:58:38 +1200, David Rowley wrote:\n> Currently, we use dynahash hash tables to store the SMgrRelation so we\n> can perform fast lookups by RelFileNodeBackend. However, I had in mind\n> that a simplehash table might perform better. 
So I tried it...\n\n> The test case was basically inserting 100 million rows one at a time\n> into a hash partitioned table with 1000 partitions and 2 int columns\n> and a primary key on one of those columns. It was about 12GB of WAL. I\n> used a hash partitioned table in the hope to create a fairly\n> random-looking SMgr hash table access pattern. Hopefully something\n> similar to what might happen in the real world.\n\nA potentially stupid question: Do we actually need to do smgr lookups in this\npath? Afaict nearly all of the buffer lookups here will end up as cache hits in\nshared buffers, correct?\n\nAfaict we'll do two smgropens in a lot of paths:\n1) XLogReadBufferExtended() does smgropen so it can do smgrnblocks()\n2) ReadBufferWithoutRelcache() does an smgropen()\n\nIt's pretty sad that we constantly do two smgropen()s to start with. But in\nthe cache hit path we don't actually need an smgropen in either case afaict.\n\nReadBufferWithoutRelcache() does an smgropen, because that's\nReadBuffer_common()'s API. Which in turn has that API because it wants to use\nRelationGetSmgr() when coming from ReadBufferExtended(). It doesn't seem\nawful to allow smgr to be NULL and to pass in the rlocator in addition.\n\nXLogReadBufferExtended() does an smgropen() so it can do smgrcreate() and\nsmgrnblocks(). But neither is needed in the cache hit case, I think. We could\ndo a \"read-only\" lookup in s_b, and only do the current logic in case that\nfails?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 23 Oct 2022 17:05:02 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Use simplehash.h instead of dynahash in SMgr" } ]
[ { "msg_contents": "\nI would like to undertake some housekeeping on PostgresNode.pm.\n\n1. OO modules in perl typically don't export anything. We should remove\nthe export settings. That would mean that clients would have to call\n\"PostgresNode->get_new_node()\" (but see item 2) and\n\"PostgresNode::get_free_port()\" instead of the unadorned calls they use now.\n\n2. There are two constructors, new() and get_new_node(). AFAICT nothing\nin our tests uses new(), and they almost certainly shouldn't anyway.\nget_new_node() calls new() to do some work, and I'd like to merge these\ntwo. The name of a constructor in perl is conventionally \"new\" as it is\nin many other OO languages, although in perl this can't apply where a\nclass provides more than one constructor. Still, if we're merging them\nthen the preference would be to call the merged function \"new\". Since\nwe'd proposing to modify the calls anyway (see item 1) this shouldn't\nimpose a huge extra workload.\n\nThese changes would make the module look more like a conventional perl\nmodule.\n\nAnother item that needs looking at is the consistent use of Carp.\nPostgresNode, TestLib and RecursiveCopy all use the Carp module, but\ncontain numerous calls to \"die\" where they should probably have calls to\n\"croak\" or \"confess\".\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 24 Apr 2021 15:09:45 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "cleaning up PostgresNode.pm" }, { "msg_contents": "On 2021-Apr-24, Andrew Dunstan wrote:\n\n> \n> I would like to undertake some housekeeping on PostgresNode.pm.\n> \n> 1. OO modules in perl typically don't export anything. We should remove\n> the export settings. That would mean that clients would have to call\n> \"PostgresNode->get_new_node()\" (but see item 2) and\n> \"PostgresNode::get_free_port()\" instead of the unadorned calls they use now.\n\n+1\n\n> 2. 
There are two constructors, new() and get_new_node(). AFAICT nothing\n> in our tests uses new(), and they almost certainly shouldn't anyway.\n> get_new_node() calls new() to do some work, and I'd like to merge these\n> two. The name of a constructor in perl is conventionally \"new\" as it is\n> in many other OO languages, although in perl this can't apply where a\n> class provides more than one constructor. Still, if we're merging them\n> then the preference would be to call the merged function \"new\". Since\n> we'd proposing to modify the calls anyway (see item 1) this shouldn't\n> impose a huge extra workload.\n\n+1 on \"new\". I think we weren't 100% clear on where we wanted it to go\ninitially, but it's now clear that get_new_node() is the constructor,\nand that new() is merely a helper. So let's rename them in a sane way.\n\n> Another item that needs looking at is the consistent use of Carp.\n> PostgresNode, TestLib and RecursiveCopy all use the Carp module, but\n> contain numerous calls to \"die\" where they should probably have calls to\n> \"croak\" or \"confess\".\n\nI wonder if it would make sense to think of PostgresNode as a feeder of\nsorts to Test::More and the like, so make it use diag(), note(),\nexplain().\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"If you have nothing to say, maybe you need just the right tool to help you\nnot say it.\" (New York Times, about Microsoft PowerPoint)", "msg_date": "Sat, 24 Apr 2021 15:14:44 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: cleaning up PostgresNode.pm" }, { "msg_contents": "On 4/24/21 3:14 PM, Alvaro Herrera wrote:\n> On 2021-Apr-24, Andrew Dunstan wrote:\n>\n>> I would like to undertake some housekeeping on PostgresNode.pm.\n>>\n>> 1. OO modules in perl typically don't export anything. We should remove\n>> the export settings.
That would mean that clients would have to call\n>> \"PostgresNode->get_new_node()\" (but see item 2) and\n>> \"PostgresNode::get_free_port()\" instead of the unadorned calls they use now.\n> +1\n>\n>> 2. There are two constructors, new() and get_new_node(). AFAICT nothing\n>> in our tests uses new(), and they almost certainly shouldn't anyway.\n>> get_new_node() calls new() to do some work, and I'd like to merge these\n>> two. The name of a constructor in perl is conventionally \"new\" as it is\n>> in many other OO languages, although in perl this can't apply where a\n>> class provides more than one constructor. Still, if we're merging them\n>> then the preference would be to call the merged function \"new\". Since\n>> we'd proposing to modify the calls anyway (see item 1) this shouldn't\n>> impose a huge extra workload.\n> +1 on \"new\". I think we weren't 100% clear on where we wanted it to go\n> initially, but it's now clear that get_new_node() is the constructor,\n> and that new() is merely a helper. So let's rename them in a sane way.\n>\n>> Another item that needs looking at is the consistent use of Carp.\n>> PostgresNode, TestLib and RecursiveCopy all use the Carp module, but\n>> contain numerous calls to \"die\" where they should probably have calls to\n>> \"croak\" or \"confess\".\n> I wonder if it would make sense to think of PostgresNode as a feeder of\n> sorts to Test::More and the like, so make it use diag(), note(),\n> explain().\n>\n\n\n\n\nHere is a set of small(ish) patches that does most of the above and then\nsome.\n\n\nPatch 1 adds back the '-w' flag to pg_ctl in the start() method. It's\nredundant on modern versions of Postgres but it's harmless, and helps\nwith subclassing for older versions where it wasn't the default.\n\nPatch 2 adds a method for altering config files as opposed to just\nappending to them. 
Again, this helps a lot in subclassing for older\nversions, which can call the parent's init() and then adjust whatever\ndoesn't work.\n\nPatch 3 unifies the constructor methods and stops exporting a\nconstructor. There is one constructor: PostgresNode::new()\n\nPatch 4 removes what's left of Exporter in PostgresNode, so it becomes a\npure OO style module.\n\nPatch 5 adds a method for getting the major version string from a\nPostgresVersion object, again useful in subclassing.\n\nPatch 6 adds a method for getting the install_path of a PostgresNode\nobject. While not strictly necessary it's consistent with other fields\nthat have getter methods. Clients should not pry into the internals of\nobjects. Experience has shown this method to be useful.\n\nPatches 7 8 and 9 contain additions to Patch 3 for things that I\noverlooked or that were not present when I originally prepared the\npatches. They would be applied alongside Patch 3, not separately.\n\n\n\nThese patches are easily broken by e.g. the addition of a new TAP test\nor the modification of an existing test. So I'm hoping to get these\nadded soon. I will add this email to the CF.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Mon, 28 Jun 2021 13:02:37 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: cleaning up PostgresNode.pm" }, { "msg_contents": "On Mon, Jun 28, 2021 at 01:02:37PM -0400, Andrew Dunstan wrote:\n> Patch 1 adds back the '-w' flag to pg_ctl in the start() method. It's\n> redundant on modern versions of Postgres but it's harmless, and helps\n> with subclassing for older versions where it wasn't the default.\n\n05cd12e applied to all the actions, so wouldn't it be more consistent\nto do the same for stop(), restart() and promote()?\n\n> Patch 2 adds a method for altering config files as opposed to just\n> appending to them. 
Again, this helps a lot in subclassing for older\n> versions, which can call the parent's init() and then adjust whatever\n> doesn't work.\n\n+unless skip_equals is true, in which case it will write\nNit: two spaces here.\n\n+Modify the named config file setting with the value. If the value is undefined,\n+instead delete the setting. If the setting is not present no action is taken.\nThis should mention that parameters commented out are ignored?\n\nskip_equals is not used. The only caller of adjust_conf is\nPostgresNode itself.\n\n> Patch 3 unifies the constructor methods and stops exporting a\n> constructor. There is one constructor: PostgresNode::new()\n\nNice^2. I agree that this is an improvement.\n\n> Patch 4 removes what's left of Exporter in PostgresNode, so it becomes a\n> pure OO style module.\n\nI have mixed feelings on this one, in a range of -0.1~0.1+, but please\ndon't consider that as a strong objection either.\n\n> Patch 5 adds a method for getting the major version string from a\n> PostgresVersion object, again useful in subclassing.\n\nWFM.\n\n> Patch 6 adds a method for getting the install_path of a PostgresNode\n> object. While not strictly necessary it's consistent with other fields\n> that have getter methods. Clients should not pry into the internals of\n> objects. Experience has shown this method to be useful.\n\nI have done that as well when looking at the test business with\npg_upgrade.\n\n> Patches 7 8 and 9 contain additions to Patch 3 for things that I\n> overlooked or that were not present when I originally prepared the\n> patches. They would be applied alongside Patch 3, not separately.\n\nThat happens.\n\n> These patches are easily broken by e.g. the addition of a new TAP test\n> or the modification of an existing test. So I'm hoping to get these\n> added soon. I will add this email to the CF.\n\nI doubt that anybody would complain about any of the changes you are\ndoing here. 
It would be better to get that merged early in the\ndevelopment cycle on the contrary.\n--\nMichael", "msg_date": "Wed, 30 Jun 2021 13:35:24 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: cleaning up PostgresNode.pm" }, { "msg_contents": "\nOn 6/30/21 12:35 AM, Michael Paquier wrote:\n> On Mon, Jun 28, 2021 at 01:02:37PM -0400, Andrew Dunstan wrote:\n>> Patch 1 adds back the '-w' flag to pg_ctl in the start() method. It's\n>> redundant on modern versions of Postgres but it's harmless, and helps\n>> with subclassing for older versions where it wasn't the default.\n> 05cd12e applied to all the actions, so wouldn't it be more consistent\n> to do the same for stop(), restart() and promote()?\n\n\n\nYes to restart(), no to stop() as it's always been the default and\nwasn't changed by 05cd12e, no to promote() as it's been the default\nsince release 10 and wasn't a valid option before that according to the\nmanuals, hence changing it would actually be a backwards compatibility\nbarrier.\n\n\n>> Patch 2 adds a method for altering config files as opposed to just\n>> appending to them. Again, this helps a lot in subclassing for older\n>> versions, which can call the parent's init() and then adjust whatever\n>> doesn't work.\n> +unless skip_equals is true, in which case it will write\n> Nit: two spaces here.\n\n\nWill fix.\n\n\n> +Modify the named config file setting with the value. If the value is undefined,\n> +instead delete the setting. If the setting is not present no action is taken.\n> This should mention that parameters commented out are ignored?\n\n\nNot really. A commented out setting isn't present.\n\n\n> skip_equals is not used. The only caller of adjust_conf is\n> PostgresNode itself.\n\n\n\nWell, nothing is using it right now :-) It's intended to be available to\nsubclasses.\n\n\nMy current subclass code doesn't actually use skip_equals either, but\nearlier revisions did. 
Think of modifying pg_hba.conf.\n\n\n>> Patch 4 removes what's left of Exporter in PostgresNode, so it becomes a\n>> pure OO style module.\n> I have mixed feelings on this one, in a range of -0.1~0.1+, but please\n> don't consider that as a strong objection either.\n\n\n\n`perldoc perlmodlib` says: As a general rule, if the module is trying to\nbe object oriented then export nothing.\n\n\nI mostly follow that rule.\n\n\nAn alternative proposal would keep using Exporter but move get_free_node\nto @EXPORT_OK, again in line with standard perl advice to avoid use of\n@EXPORT, which means clients would have to import it explicitly with\n\"use PostgresNode qw(get_free_port);\" I don't think there's much gain\nfrom that though.\n\n\n>> These patches are easily broken by e.g. the addition of a new TAP test\n>> or the modification of an existing test. So I'm hoping to get these\n>> added soon. I will add this email to the CF.\n> I doubt that anybody would complain about any of the changes you are\n> doing here. It would be better to get that merged early in the\n> development cycle on the contrary.\n\n\n\nThat's my intention. Thanks for reviewing.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 30 Jun 2021 08:08:14 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: cleaning up PostgresNode.pm" }, { "msg_contents": "On 2021-Jun-30, Andrew Dunstan wrote:\n\n> On 6/30/21 12:35 AM, Michael Paquier wrote:\n> > On Mon, Jun 28, 2021 at 01:02:37PM -0400, Andrew Dunstan wrote:\n\n> > skip_equals is not used. The only caller of adjust_conf is\n> > PostgresNode itself.\n> \n> Well, nothing is using it right now :-) It's intended to be available to\n> subclasses.\n> \n> My current subclass code doesn't actually use skip_equals either, but\n> earlier revisions did. 
Think of modifying pg_hba.conf.\n\nI thought it was about recovery.conf ...\n\n-- \n�lvaro Herrera Valdivia, Chile\n https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 30 Jun 2021 08:30:22 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: cleaning up PostgresNode.pm" }, { "msg_contents": "\nOn 6/30/21 8:30 AM, Alvaro Herrera wrote:\n> On 2021-Jun-30, Andrew Dunstan wrote:\n>\n>> On 6/30/21 12:35 AM, Michael Paquier wrote:\n>>> On Mon, Jun 28, 2021 at 01:02:37PM -0400, Andrew Dunstan wrote:\n>>> skip_equals is not used. The only caller of adjust_conf is\n>>> PostgresNode itself.\n>> Well, nothing is using it right now :-) It's intended to be available to\n>> subclasses.\n>>\n>> My current subclass code doesn't actually use skip_equals either, but\n>> earlier revisions did. Think of modifying pg_hba.conf.\n> I thought it was about recovery.conf ...\n\n\n\nYes, that too.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 30 Jun 2021 09:30:53 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: cleaning up PostgresNode.pm" }, { "msg_contents": "On 6/28/21 1:02 PM, Andrew Dunstan wrote:\n> On 4/24/21 3:14 PM, Alvaro Herrera wrote:\n>> On 2021-Apr-24, Andrew Dunstan wrote:\n>>\n>>> I would like to undertake some housekeeping on PostgresNode.pm.\n>>>\n>>> 1. OO modules in perl typically don't export anything. We should remove\n>>> the export settings. That would mean that clients would have to call\n>>> \"PostgresNode->get_new_node()\" (but see item 2) and\n>>> \"PostgresNode::get_free_port()\" instead of the unadorned calls they use now.\n>> +1\n>>\n>>> 2. There are two constructors, new() and get_new_node(). AFAICT nothing\n>>> in our tests uses new(), and they almost certainly shouldn't anyway.\n>>> get_new_node() calls new() to do some work, and I'd like to merge these\n>>> two. 
The name of a constructor in perl is conventionally \"new\" as it is\n>>> in many other OO languages, although in perl this can't apply where a\n>>> class provides more than one constructor. Still, if we're merging them\n>>> then the preference would be to call the merged function \"new\". Since\n>>> we'd proposing to modify the calls anyway (see item 1) this shouldn't\n>>> impose a huge extra workload.\n>> +1 on \"new\". I think we weren't 100% clear on where we wanted it to go\n>> initially, but it's now clear that get_new_node() is the constructor,\n>> and that new() is merely a helper. So let's rename them in a sane way.\n>>\n>>> Another item that needs looking at is the consistent use of Carp.\n>>> PostgresNode, TestLib and RecursiveCopy all use the Carp module, but\n>>> contain numerous calls to \"die\" where they should probably have calls to\n>>> \"croak\" or \"confess\".\n>> I wonder if it would make sense to think of PostgresNode as a feeder of\n>> sorts to Test::More and the like, so make it use diag(), note(),\n>> explain().\n>>\n>\n>\n>\n> Here is a set of small(ish) patches that does most of the above and then\n> some.\n>\n>\n> Patch 1 adds back the '-w' flag to pg_ctl in the start() method. It's\n> redundant on modern versions of Postgres but it's harmless, and helps\n> with subclassing for older versions where it wasn't the default.\n>\n> Patch 2 adds a method for altering config files as opposed to just\n> appending to them. Again, this helps a lot in subclassing for older\n> versions, which can call the parent's init() and then adjust whatever\n> doesn't work.\n>\n> Patch 3 unifies the constructor methods and stops exporting a\n> constructor. 
There is one constructor: PostgresNode::new()\n>\n> Patch 4 removes what's left of Exporter in PostgresNode, so it becomes a\n> pure OO style module.\n>\n> Patch 5 adds a method for getting the major version string from a\n> PostgresVersion object, again useful in subclassing.\n>\n> Patch 6 adds a method for getting the install_path of a PostgresNode\n> object. While not strictly necessary it's consistent with other fields\n> that have getter methods. Clients should not pry into the internals of\n> objects. Experience has shown this method to be useful.\n>\n> Patches 7 8 and 9 contain additions to Patch 3 for things that I\n> overlooked or that were not present when I originally prepared the\n> patches. They would be applied alongside Patch 3, not separately.\n>\n>\n>\n> These patches are easily broken by e.g. the addition of a new TAP test\n> or the modification of an existing test. So I'm hoping to get these\n> added soon. I will add this email to the CF.\n>\n>\n\n\nNew version with a small change to fix bitrot.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Fri, 16 Jul 2021 15:32:26 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: cleaning up PostgresNode.pm" }, { "msg_contents": "On 7/16/21 3:32 PM, Andrew Dunstan wrote:\n> On 6/28/21 1:02 PM, Andrew Dunstan wrote:\n>> On 4/24/21 3:14 PM, Alvaro Herrera wrote:\n>>> On 2021-Apr-24, Andrew Dunstan wrote:\n>>>\n>>>> I would like to undertake some housekeeping on PostgresNode.pm.\n>>>>\n>>>> 1. OO modules in perl typically don't export anything. We should remove\n>>>> the export settings. That would mean that clients would have to call\n>>>> \"PostgresNode->get_new_node()\" (but see item 2) and\n>>>> \"PostgresNode::get_free_port()\" instead of the unadorned calls they use now.\n>>> +1\n>>>\n>>>> 2. There are two constructors, new() and get_new_node(). 
AFAICT nothing\n>>>> in our tests uses new(), and they almost certainly shouldn't anyway.\n>>>> get_new_node() calls new() to do some work, and I'd like to merge these\n>>>> two. The name of a constructor in perl is conventionally \"new\" as it is\n>>>> in many other OO languages, although in perl this can't apply where a\n>>>> class provides more than one constructor. Still, if we're merging them\n>>>> then the preference would be to call the merged function \"new\". Since\n>>>> we'd proposing to modify the calls anyway (see item 1) this shouldn't\n>>>> impose a huge extra workload.\n>>> +1 on \"new\". I think we weren't 100% clear on where we wanted it to go\n>>> initially, but it's now clear that get_new_node() is the constructor,\n>>> and that new() is merely a helper. So let's rename them in a sane way.\n>>>\n>>>> Another item that needs looking at is the consistent use of Carp.\n>>>> PostgresNode, TestLib and RecursiveCopy all use the Carp module, but\n>>>> contain numerous calls to \"die\" where they should probably have calls to\n>>>> \"croak\" or \"confess\".\n>>> I wonder if it would make sense to think of PostgresNode as a feeder of\n>>> sorts to Test::More and the like, so make it use diag(), note(),\n>>> explain().\n>>>\n>>\n>>\n>> Here is a set of small(ish) patches that does most of the above and then\n>> some.\n>>\n>>\n>> Patch 1 adds back the '-w' flag to pg_ctl in the start() method. It's\n>> redundant on modern versions of Postgres but it's harmless, and helps\n>> with subclassing for older versions where it wasn't the default.\n>>\n>> Patch 2 adds a method for altering config files as opposed to just\n>> appending to them. Again, this helps a lot in subclassing for older\n>> versions, which can call the parent's init() and then adjust whatever\n>> doesn't work.\n>>\n>> Patch 3 unifies the constructor methods and stops exporting a\n>> constructor. 
There is one constructor: PostgresNode::new()\n>>\n>> Patch 4 removes what's left of Exporter in PostgresNode, so it becomes a\n>> pure OO style module.\n>>\n>> Patch 5 adds a method for getting the major version string from a\n>> PostgresVersion object, again useful in subclassing.\n>>\n>> Patch 6 adds a method for getting the install_path of a PostgresNode\n>> object. While not strictly necessary it's consistent with other fields\n>> that have getter methods. Clients should not pry into the internals of\n>> objects. Experience has shown this method to be useful.\n>>\n>> Patches 7 8 and 9 contain additions to Patch 3 for things that I\n>> overlooked or that were not present when I originally prepared the\n>> patches. They would be applied alongside Patch 3, not separately.\n>>\n>>\n>>\n>> These patches are easily broken by e.g. the addition of a new TAP test\n>> or the modification of an existing test. So I'm hoping to get these\n>> added soon. I will add this email to the CF.\n>>\n>>\n>\n> New version with a small change to fix bitrot.\n>\n>\n\n\nNew set with fixups incorporated and review comments attended to. I'm\nintending to apply this later this week.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Sun, 18 Jul 2021 11:48:10 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: cleaning up PostgresNode.pm" }, { "msg_contents": "On 7/18/21 11:48 AM, Andrew Dunstan wrote:\n> On 7/16/21 3:32 PM, Andrew Dunstan wrote:\n>> On 6/28/21 1:02 PM, Andrew Dunstan wrote:\n>>> On 4/24/21 3:14 PM, Alvaro Herrera wrote:\n>>>> On 2021-Apr-24, Andrew Dunstan wrote:\n>>>>\n>>>>> I would like to undertake some housekeeping on PostgresNode.pm.\n>>>>>\n>>>>> 1. OO modules in perl typically don't export anything. We should remove\n>>>>> the export settings. 
That would mean that clients would have to call\n>>>>> \"PostgresNode->get_new_node()\" (but see item 2) and\n>>>>> \"PostgresNode::get_free_port()\" instead of the unadorned calls they use now.\n>>>> +1\n>>>>\n>>>>> 2. There are two constructors, new() and get_new_node(). AFAICT nothing\n>>>>> in our tests uses new(), and they almost certainly shouldn't anyway.\n>>>>> get_new_node() calls new() to do some work, and I'd like to merge these\n>>>>> two. The name of a constructor in perl is conventionally \"new\" as it is\n>>>>> in many other OO languages, although in perl this can't apply where a\n>>>>> class provides more than one constructor. Still, if we're merging them\n>>>>> then the preference would be to call the merged function \"new\". Since\n>>>>> we'd proposing to modify the calls anyway (see item 1) this shouldn't\n>>>>> impose a huge extra workload.\n>>>> +1 on \"new\". I think we weren't 100% clear on where we wanted it to go\n>>>> initially, but it's now clear that get_new_node() is the constructor,\n>>>> and that new() is merely a helper. So let's rename them in a sane way.\n>>>>\n>>>>> Another item that needs looking at is the consistent use of Carp.\n>>>>> PostgresNode, TestLib and RecursiveCopy all use the Carp module, but\n>>>>> contain numerous calls to \"die\" where they should probably have calls to\n>>>>> \"croak\" or \"confess\".\n>>>> I wonder if it would make sense to think of PostgresNode as a feeder of\n>>>> sorts to Test::More and the like, so make it use diag(), note(),\n>>>> explain().\n>>>>\n>>>\n>>> Here is a set of small(ish) patches that does most of the above and then\n>>> some.\n>>>\n>>>\n>>> Patch 1 adds back the '-w' flag to pg_ctl in the start() method. It's\n>>> redundant on modern versions of Postgres but it's harmless, and helps\n>>> with subclassing for older versions where it wasn't the default.\n>>>\n>>> Patch 2 adds a method for altering config files as opposed to just\n>>> appending to them. 
Again, this helps a lot in subclassing for older\n>>> versions, which can call the parent's init() and then adjust whatever\n>>> doesn't work.\n>>>\n>>> Patch 3 unifies the constructor methods and stops exporting a\n>>> constructor. There is one constructor: PostgresNode::new()\n>>>\n>>> Patch 4 removes what's left of Exporter in PostgresNode, so it becomes a\n>>> pure OO style module.\n>>>\n>>> Patch 5 adds a method for getting the major version string from a\n>>> PostgresVersion object, again useful in subclassing.\n>>>\n>>> Patch 6 adds a method for getting the install_path of a PostgresNode\n>>> object. While not strictly necessary it's consistent with other fields\n>>> that have getter methods. Clients should not pry into the internals of\n>>> objects. Experience has shown this method to be useful.\n>>>\n>>> Patches 7 8 and 9 contain additions to Patch 3 for things that I\n>>> overlooked or that were not present when I originally prepared the\n>>> patches. They would be applied alongside Patch 3, not separately.\n>>>\n>>>\n>>>\n>>> These patches are easily broken by e.g. the addition of a new TAP test\n>>> or the modification of an existing test. So I'm hoping to get these\n>>> added soon. I will add this email to the CF.\n>>>\n>>>\n>> New version with a small change to fix bitrot.\n>>\n>>\n>\n> New set with fixups incorporated and review comments attended to. I'm\n> intending to apply this later this week.\n>\n\n\nThis time without a missing comma.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Sun, 18 Jul 2021 14:19:10 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: cleaning up PostgresNode.pm" } ]
[ { "msg_contents": "Hi,\n\nWhile doing some sanity checks on the regression tests, I found some queries\nthat are semantically different but end up with identical query_id.\n\nTwo are an old issues:\n\n- the \"ONLY\" in FROM [ONLY] isn't hashed\n- the agglevelsup field in GROUPING isn't hashed\n\nAnother one was introduced in pg13 with the WITH TIES not being hashed.\n\nThe last one new in pg14: the \"DISTINCT\" in \"GROUP BY [DISTINCT]\" isn't hash.\n\nI'm attaching a patch that fixes those, with regression tests to reproduce each\nproblem.\n\nThere are also 2 additional debatable cases on whether this is a semantic\ndifference or not:\n\n- aliases aren't hashed. That's usually not a problem, except when you use\n row_to_json(), since you'll get different keys\n\n- the NAME in XmlExpr (eg: xmlpi(NAME foo,...)) isn't hashed, so you generate\n different elements", "msg_date": "Sun, 25 Apr 2021 16:11:19 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Some oversights in query_id calculation" }, { "msg_contents": "Hi Julien,\n\n> I'm attaching a patch that fixes those, with regression tests to reproduce each\n> problem.\n\nI believe something could be not quite right with the patch. Here is what I did:\n\n$ git apply ...\n# revert the changes in the code but keep the new tests\n$ git checkout src/backend/utils/misc/queryjumble.c\n$ ./full-build.sh && single-install.sh && make installcheck-world\n\n... where named .sh scripts are something I use to quickly check a patch [1].\n\nI was expecting that several tests will fail but they didn't. 
Maybe I\nmissed something?\n\n[1]: https://github.com/afiskon/pgscripts\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 28 Apr 2021 13:19:36 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Some oversights in query_id calculation" }, { "msg_contents": "Hi Aleksander,\n\nOn Wed, Apr 28, 2021 at 01:19:36PM +0300, Aleksander Alekseev wrote:\n> Hi Julien,\n> \n> > I'm attaching a patch that fixes those, with regression tests to reproduce each\n> > problem.\n> \n> I believe something could be not quite right with the patch. Here is what I did:\n> \n> $ git apply ...\n> # revert the changes in the code but keep the new tests\n> $ git checkout src/backend/utils/misc/queryjumble.c\n> $ ./full-build.sh && single-install.sh && make installcheck-world\n> \n> ... where named .sh scripts are something I use to quickly check a patch [1].\n> \n> I was expecting that several tests will fail but they didn't. Maybe I\n> missed something?\n\nI think it's because installcheck-* don't run pg_stat_statements' tests, see\nits Makefile:\n\n> # Disabled because these tests require \"shared_preload_libraries=pg_stat_statements\",\n> # which typical installcheck users do not have (e.g. buildfarm clients).\n> NO_INSTALLCHECK = 1\n\nYou should see failures doing a check-world or simply a make -C\ncontrib/pg_stat_statements check\n\n\n", "msg_date": "Wed, 28 Apr 2021 18:27:41 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Some oversights in query_id calculation" }, { "msg_contents": "Hi Julien,\n\n> You should see failures doing a check-world or simply a make -C\n> contrib/pg_stat_statements check\n\nSorry, my bad. 
I was running make check-world, but did it with -j4 flag\nwhich was a mistake.\n\nThe patch is OK.\n\n\nOn Wed, Apr 28, 2021 at 1:27 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> Hi Aleksander,\n>\n> On Wed, Apr 28, 2021 at 01:19:36PM +0300, Aleksander Alekseev wrote:\n> > Hi Julien,\n> >\n> > > I'm attaching a patch that fixes those, with regression tests to\n> reproduce each\n> > > problem.\n> >\n> > I believe something could be not quite right with the patch. Here is\n> what I did:\n> >\n> > $ git apply ...\n> > # revert the changes in the code but keep the new tests\n> > $ git checkout src/backend/utils/misc/queryjumble.c\n> > $ ./full-build.sh && single-install.sh && make installcheck-world\n> >\n> > ... where named .sh scripts are something I use to quickly check a patch\n> [1].\n> >\n> > I was expecting that several tests will fail but they didn't. Maybe I\n> > missed something?\n>\n> I think it's because installcheck-* don't run pg_stat_statements' tests,\n> see\n> its Makefile:\n>\n> > # Disabled because these tests require\n> \"shared_preload_libraries=pg_stat_statements\",\n> > # which typical installcheck users do not have (e.g. buildfarm clients).\n> > NO_INSTALLCHECK = 1\n>\n> You should see failures doing a check-world or simply a make -C\n> contrib/pg_stat_statements check\n>\n\n\n-- \nBest regards,\nAleksander Alekseev
", "msg_date": "Wed, 28 Apr 2021 15:22:39 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Some oversights in query_id calculation" }, { "msg_contents": "Hi Aleksander,\n\nOn Wed, Apr 28, 2021 at 03:22:39PM +0300, Aleksander Alekseev wrote:\n> Hi Julien,\n> \n> > You should see failures doing a check-world or simply a make -C\n> > contrib/pg_stat_statements check\n> \n> Sorry, my bad.
I was running make check-world, but did it with -j4 flag\n> > which was a mistake.\n> > \n> > The patch is OK.\n> \n> Thanks for reviewing!\n\nPatch applied, thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 3 May 2021 14:59:42 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Some oversights in query_id calculation" }, { "msg_contents": "On Mon, May 03, 2021 at 02:59:42PM -0400, Bruce Momjian wrote:\n> On Sun, May 2, 2021 at 12:27:37PM +0800, Julien Rouhaud wrote:\n> > Hi Aleksander,\n> > \n> > On Wed, Apr 28, 2021 at 03:22:39PM +0300, Aleksander Alekseev wrote:\n> > > Hi Julien,\n> > > \n> > > > You should see failures doing a check-world or simply a make -C\n> > > > contrib/pg_stat_statements check\n> > > \n> > > Sorry, my bad. I was running make check-world, but did it with -j4 flag\n> > > which was a mistake.\n> > > \n> > > The patch is OK.\n> > \n> > Thanks for reviewing!\n> \n> Patch applied, thanks.\n\nThanks a lot Bruce!\n\n\n", "msg_date": "Wed, 5 May 2021 08:33:36 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Some oversights in query_id calculation" } ]
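The oversights patched in the thread above are, broadly, of one shape: a parse-node field that never gets fed into the query fingerprint, so two queries differing only in that field collide on the same query_id. A toy sketch of that failure mode follows; the names (DemoNode, jumble_int, fingerprint) and the FNV-1a-style mixing are illustrative stand-ins, not the real API, which lives in src/backend/utils/misc/queryjumble.c.

```c
/*
 * Toy model of query-ID "jumbling". Every semantically significant
 * field must be mixed into the hash; a field that is skipped (the kind
 * of oversight fixed in this thread) makes distinct queries collide.
 */
#include <stddef.h>
#include <stdint.h>

typedef struct JumbleState
{
    uint64_t hash;              /* running FNV-1a style hash */
} JumbleState;

static void
jumble_bytes(JumbleState *js, const void *p, size_t n)
{
    const unsigned char *b = (const unsigned char *) p;

    for (size_t i = 0; i < n; i++)
        js->hash = (js->hash ^ b[i]) * 1099511628211ULL;
}

static void
jumble_int(JumbleState *js, int v)
{
    jumble_bytes(js, &v, sizeof(v));
}

/* A fake parse node with two semantically significant fields. */
typedef struct DemoNode
{
    int kind;
    int option;
} DemoNode;

static uint64_t
fingerprint(const DemoNode *n, int jumble_option)
{
    JumbleState js = {14695981039346656037ULL};

    jumble_int(&js, n->kind);
    if (jumble_option)          /* forgetting this line is the bug */
        jumble_int(&js, n->option);
    return js.hash;
}
```

With the field skipped, nodes {1, 0} and {1, 7} (think "the same statement with a different clause option") get identical fingerprints; once the field is jumbled they diverge.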
[ { "msg_contents": "We had a report [1] of a case where a broken client application\nsent some garbage to the server, which then hung up because it\nmisinterpreted the garbage as a very long message length, and was\nsitting waiting for data that would never arrive. There is already\na sanity check on the message type byte in postgres.c's SocketBackend\n(which the trouble case accidentally got past because 'S' is a valid\ntype code); but the only check on the message length is that it be\nat least 4.\n\npq_getmessage() does have the ability to enforce an upper limit on\nmessage length, but we only use that capability for authentication\nmessages, and not entirely consistently even there.\n\nMeanwhile on the client side, libpq has had simple message-length\nsanity checking for ages: it disbelieves message lengths greater\nthan 30000 bytes unless the message type is one of a short list\nof types that can be long.\n\nSo the attached proposed patch changes things to make it required\nnot optional for callers of pq_getmessage to provide an upper length\nbound, and installs the same sort of short-vs-long message heuristic\nas libpq has in the server. This is also a good opportunity for\nother callers to absorb the lesson SocketBackend learned many years\nago: we should validate the message type code *before* believing\nanything about the message length. All of this is just heuristic\nof course, but I think it makes for a substantial reduction in the\ntrouble surface.\n\nGiven the small number of complaints to date, I'm hesitant to\nback-patch this: if there's anybody out there with valid use for\nlong messages that I didn't think should be long, this might break\nthings for them. 
But I think it'd be reasonable to sneak it into\nv14, since we've not started beta yet.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/YIKCCXcEozx9iiBU%40c720-r368166.fritz.box", "msg_date": "Sun, 25 Apr 2021 13:51:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Better sanity checking for message length words" }, { "msg_contents": "Hi Tom,\n\n> ...\n> Given the small number of complaints to date, I'm hesitant to\n> back-patch this: if there's anybody out there with valid use for\n> long messages that I didn't think should be long, this might break\n> things for them. But I think it'd be reasonable to sneak it into\n> v14, since we've not started beta yet.\n>\n> Thoughts?\n\nI'm having slight issues applying your patch to the `master` branch.\nIs it the right target?\n\nRegarding the idea, I think extra checks are a good thing and\ndefinitely something that can be introduced in the next major version.\nIf we receive a complaint during beta-testing we can revert the patch\nor maybe increase the limit for small messages.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 27 Apr 2021 13:29:02 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Better sanity checking for message length words" }, { "msg_contents": "Aleksander Alekseev <aleksander@timescale.com> writes:\n> I'm having slight issues applying your patch to the `master` branch.\n> Is it the right target?\n\n[ scratches head ... 
] The patch still applies perfectly cleanly\nfor me, using either \"patch\" or \"git apply\".\n\n> Regarding the idea, I think extra checks are a good thing and\n> definitely something that can be introduced in the next major version.\n> If we receive a complaint during beta-testing we can revert the patch\n> or maybe increase the limit for small messages.\n\nActually, I did some more testing yesterday and found that\n\"make check-world\" passes with PQ_SMALL_MESSAGE_LIMIT values\nas small as 16. That may say more about our testing than\nanything else --- for example, it implies we're not using long\nstatement or portal names anywhere. But still, it suggests\nthat 30000 is between one and two orders of magnitude too large.\nI'm now thinking that 10000 would be a good conservative setting,\nor we could try 1000 if we want to be a bit more aggressive.\nAs you say, beta-testing feedback could result in further\nmodifications.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 27 Apr 2021 10:38:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Better sanity checking for message length words" }, { "msg_contents": "Hi Tom,\n\n> scratches head ... ] The patch still applies perfectly cleanly\n> for me, using either \"patch\" or \"git apply\".\n\nSorry, my bad. It was about lines separating on different platforms. The\npatch is fine and passes installcheck-world on MacOS.\n\n-- \nBest regards,\nAleksander Alekseev\n", "msg_date": "Wed, 28 Apr 2021 11:40:27 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Better sanity checking for message length words" }, { "msg_contents": "Aleksander Alekseev <aleksander@timescale.com> writes:\n> Sorry, my bad. It was about lines separating on different platforms. The\n> patch is fine and passes installcheck-world on MacOS.\n\nPushed, thanks for looking at it!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 28 Apr 2021 15:51:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Better sanity checking for message length words" } ]
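The heuristic adopted in the thread above (validate the message type byte first, then refuse to believe a large length word unless that type is one of the few that may legitimately be long) can be sketched roughly as follows. The two limits and the set of long-capable type codes below are illustrative stand-ins, not the committed values; the real enforcement sits in SocketBackend and the pq_getmessage callers discussed above.

```c
/*
 * Sketch of short-vs-long message length sanity checking. A garbage
 * byte stream that happens to decode to a huge length word is rejected
 * up front instead of making the server wait for data that will never
 * arrive. Limits and the "long" type list are placeholders.
 */
#include <stdbool.h>
#include <stdint.h>

#define SMALL_MESSAGE_LIMIT 10000               /* most messages are short */
#define LARGE_MESSAGE_LIMIT (64 * 1024 * 1024)  /* hard upper bound */

static bool
message_type_is_long_capable(char msgtype)
{
    switch (msgtype)
    {
        case 'Q':               /* simple Query */
        case 'P':               /* Parse */
        case 'B':               /* Bind */
        case 'd':               /* CopyData */
        case 'F':               /* FunctionCall */
            return true;
        default:
            return false;
    }
}

/*
 * The protocol's length word counts itself, so 4 is the minimum legal
 * value; anything larger is believed only up to a per-type cap.
 */
static bool
message_length_ok(char msgtype, int32_t len)
{
    if (len < 4)
        return false;
    if (message_type_is_long_capable(msgtype))
        return len <= LARGE_MESSAGE_LIMIT;
    return len <= SMALL_MESSAGE_LIMIT;
}
```

This is only a heuristic, as the thread notes: it shrinks the trouble surface for broken clients without rejecting any message a well-behaved client would send.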
[ { "msg_contents": "Hi,\n\nOn twitter it was mentioned [1] that gist index builds spend a lot of time\nin FunctionCall3Coll. Which could be addressed to a good degree by not\nusing FunctionCall3Coll() which needs to call InitFunctionCallInfoData()\nevery time, but instead doing so once by including the FunctionCallInfo\nin GISTSTATE.\n\nWhich made me look at GISTSTATEs layout. And, uh, I was a bit shocked:\nstruct GISTSTATE {\n MemoryContext scanCxt; /* 0 8 */\n MemoryContext tempCxt; /* 8 8 */\n TupleDesc leafTupdesc; /* 16 8 */\n TupleDesc nonLeafTupdesc; /* 24 8 */\n TupleDesc fetchTupdesc; /* 32 8 */\n FmgrInfo consistentFn[32]; /* 40 1536 */\n /* --- cacheline 24 boundary (1536 bytes) was 40 bytes ago --- */\n FmgrInfo unionFn[32]; /* 1576 1536 */\n...\n /* --- cacheline 216 boundary (13824 bytes) was 40 bytes ago --- */\n Oid supportCollation[32]; /* 13864 128 */\n\n /* size: 13992, cachelines: 219, members: 15 */\n /* last cacheline: 40 bytes */\n};\n\nSo the basic GISTSTATE is 14kB large. 
And all the information needed to\ncall support functions for one attribute is spaced so far apart that\nit's guaranteed to be on different cachelines and to be very unlikely to\nbe prefetched by the hardware prefetcher.\n\nIt seems pretty clear that this should be changed to be something more\nlike\n\ntypedef struct GIST_COL_STATE\n{\n\tFmgrInfo\tconsistentFn;\n\tFmgrInfo\tunionFn;\n\tFmgrInfo\tcompressFn;\n\tFmgrInfo\tdecompressFn;\n\tFmgrInfo\tpenaltyFn;\n\tFmgrInfo\tpicksplitFn;\n\tFmgrInfo\tequalFn;\n\tFmgrInfo\tdistanceFn;\n\tFmgrInfo\tfetchFn;\n\n\t/* Collations to pass to the support functions */\n\tOid\t\t\tsupportCollation;\n} GIST_COL_STATE;\n\ntypedef struct GISTSTATE\n{\n\tMemoryContext scanCxt;\t\t/* context for scan-lifespan data */\n\tMemoryContext tempCxt;\t\t/* short-term context for calling functions */\n\n\tTupleDesc\tleafTupdesc;\t/* index's tuple descriptor */\n\tTupleDesc\tnonLeafTupdesc; /* truncated tuple descriptor for non-leaf\n\t\t\t\t\t\t\t\t * pages */\n\tTupleDesc\tfetchTupdesc;\t/* tuple descriptor for tuples returned in an\n\t\t\t\t\t\t\t\t * index-only scan */\n\n GIST_COL_STATE column_state[FLEXIBLE_ARRAY_MEMBER];\n}\n\nwith initGISTstate allocating based on\nIndexRelationGetNumberOfKeyAttributes() instead of using a constant.\n\nAnd then subsequently change GIST_COL_STATE to embed the\nFunctionCallInfo, rather than initializing them on the stack for every\ncall.\n\n\nI'm not planning on doing this work, but I thought it's sensible to send\nto the list anyway.\n\nGreetings,\n\nAndres Freund\n\n[1] https://twitter.com/komzpa/status/1386420422225240065\n\n\n", "msg_date": "Sun, 25 Apr 2021 15:20:53 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "GISTSTATE is too large" }, { "msg_contents": "\n\n> On 26 Apr 2021, at 03:20, Andres Freund <andres@anarazel.de> wrote:\n> \n> So the basic GISTSTATE is 14kB large.
And all the information needed to\n> call support functions for one attribute is spaced so far appart that\n> it's guaranteed to be on different cachelines and to be very unlikely to\n> be prefetched by the hardware prefetcher.\n> \n> It seems pretty clear that this should be changed to be something more\n> like\n> \n> ...\n> \n> with initGISTstate allocating based on\n> IndexRelationGetNumberOfKeyAttributes() instead of using a constant.\n> \n> And then subsequently change GIST_COL_STATE to embed the\n> FunctionCallInfo, rather than initializiing them on the stack for every\n> call.\nYes, this makes sense. Also, it's viable to reorder fields to group scan and insert routines together, currently they are interlaced.\nOr maybe we could even split state into insert state and scan state.\n\n\n> I'm not planning on doing this work, but I thought it's sensible to send\n> to the list anyway.\n\nThanks for idea, I would give it a shot this summer, unless someone else will take it earlier.\nBTW, It would make sense to avoid penalty call at all: there are many GiST-based access methods that want to see all items together to choose insertion subtree (e.g. R*-tree and RR-tree). 
Calling penalty function for each tuple on page often is not a good idea at all.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Mon, 26 Apr 2021 10:11:13 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: GISTSTATE is too large" }, { "msg_contents": "On 4/26/21 12:20 AM, Andres Freund wrote:\n> It seems pretty clear that this should be changed to be something more\n> like\n> \n> [...]\n> \n> with initGISTstate allocating based on\n> IndexRelationGetNumberOfKeyAttributes() instead of using a constant.\n> \n> And then subsequently change GIST_COL_STATE to embed the\n> FunctionCallInfo, rather than initializiing them on the stack for every\n> call.\n> \n> \n> I'm not planning on doing this work, but I thought it's sensible to send\n> to the list anyway.\n\nI did the first part since it seemed easy enough and an obvious win for \nall workloads.\n\nAndreas", "msg_date": "Sun, 30 May 2021 15:14:33 +0200", "msg_from": "Andreas Karlsson <andreas@proxel.se>", "msg_from_op": false, "msg_subject": "Re: GISTSTATE is too large" }, { "msg_contents": "On Sun, May 30, 2021 at 6:14 AM Andreas Karlsson <andreas@proxel.se> wrote:\n\n> On 4/26/21 12:20 AM, Andres Freund wrote:\n> > It seems pretty clear that this should be changed to be something more\n> > like\n> >\n> > [...]\n> >\n> > with initGISTstate allocating based on\n> > IndexRelationGetNumberOfKeyAttributes() instead of using a constant.\n> >\n> > And then subsequently change GIST_COL_STATE to embed the\n> > FunctionCallInfo, rather than initializiing them on the stack for every\n> > call.\n> >\n> >\n> > I'm not planning on doing this work, but I thought it's sensible to send\n> > to the list anyway.\n>\n> I did the first part since it seemed easy enough and an obvious win for\n> all workloads.\n>\n> Andreas\n>\n\nHi,\nMinor comment:\n\n+ /* Collations to pass to the support functions */\n+ Oid supportCollation;\n\n Collations -> Collation\nThe field used to 
be an array. Now it is one Oid.\n\nCheers\n", "msg_date": "Sun, 30 May 2021 07:21:28 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: GISTSTATE is too large" }, { "msg_contents": "Hi,\n\nOn 2021-05-30 15:14:33 +0200, Andreas Karlsson wrote:\n> I did the first part since it seemed easy enough and an obvious win for all\n> workloads.\n\nCool!\n\n\n> +typedef struct GIST_COL_STATE\n> +{\n> +\tFmgrInfo\tconsistentFn;\n> +\tFmgrInfo\tunionFn;\n> +\tFmgrInfo\tcompressFn;\n> +\tFmgrInfo\tdecompressFn;\n> +\tFmgrInfo\tpenaltyFn;\n> +\tFmgrInfo\tpicksplitFn;\n> +\tFmgrInfo\tequalFn;\n> +\tFmgrInfo\tdistanceFn;\n> +\tFmgrInfo\tfetchFn;\n> +\n> +\t/* Collations to pass to the support functions */\n> +\tOid\t\t\tsupportCollation;\n> +} GIST_COL_STATE;\n> +\n> /*\n> * GISTSTATE: information needed for any GiST index operation\n> *\n> @@ -83,18 +99,7 @@ typedef struct GISTSTATE\n> \tTupleDesc\tfetchTupdesc;\t/* tuple descriptor for tuples returned in an\n> \t\t\t\t\t\t\t\t * index-only scan */\n> \n> -\tFmgrInfo\tconsistentFn[INDEX_MAX_KEYS];\n>
-\tFmgrInfo\tunionFn[INDEX_MAX_KEYS];\n> -\tFmgrInfo\tcompressFn[INDEX_MAX_KEYS];\n> -\tFmgrInfo\tdecompressFn[INDEX_MAX_KEYS];\n> -\tFmgrInfo\tpenaltyFn[INDEX_MAX_KEYS];\n> -\tFmgrInfo\tpicksplitFn[INDEX_MAX_KEYS];\n> -\tFmgrInfo\tequalFn[INDEX_MAX_KEYS];\n> -\tFmgrInfo\tdistanceFn[INDEX_MAX_KEYS];\n> -\tFmgrInfo\tfetchFn[INDEX_MAX_KEYS];\n> -\n> -\t/* Collations to pass to the support functions */\n> -\tOid\t\t\tsupportCollation[INDEX_MAX_KEYS];\n> +\tGIST_COL_STATE column_state[FLEXIBLE_ARRAY_MEMBER];\n> } GISTSTATE;\n\nThis makes me wonder if the better design would be to keep the layout of\ndense arrays for each type of function, but to make it more dense by\nallocating dynamically. As above GIST_COL_STATE is 436 bytes (I think),\ni.e. *well* above a cache line - far enough apart that accessing\ndifferent column's equalFn or such will be hard for the hardware\nprefetcher to understand. Presumably not all functions are accessed all\nthe time.\n\nSo we could lay it out as\n\nstruct GISTSTATE\n{\n...\n FmgrInfo *consistentFn;\n FmgrInfo *unionFn;\n...\n}\n[ncolumns consistentFns follow]\n[ncolumns unionFn's follow]\n\nWhich'd likely end up with better cache locality for gist indexes with a\nfew columns. At the expense of a pointer indirection, of course.\n\n\nAnother angle: I don't know how it is for GIST, but for btree, the\nFunctionCall2Coll() etc overhead shows up significantly - directly\nallocating the FunctionCallInfo and initializing it once, instead of\nevery call, reduces overhead noticeably (but is a bit complicated to\nimplement, due to the insertion scan and stuff). I'd be surprised if we\ndidn't get better performance for gist if it had initialized-once\nFunctionCallInfos intead of the FmgrInfos.\n\nAnd that's not even just true because of needing to re-initialize\nFunctionCallInfo on every call, but also because the function call\nitself rarely accesses the data from the FmgrInfo, but always accesses\nthe FunctionCallInfo. 
And a FunctionCallInfos with 1 argument is the\nsame size as a FmgrInfo, with 2 it's 16 bytes more. So storing the\nonce-initialized FunctionCallInfo results in considerably better\nlocality, by not having all the FmgrInfos in cache.\n\nOne annoying bit: Right now it's not trivial to declare arrays of\nspecific-number-of-arguments FunctionCallInfos. See the uglyness of\nLOCAL_FCINFO. I think we should consider having a bunch of typedefs for\n1..3 argument FunctionCallInfo structs to make that easier. Probably\nwould still need union trickery, but it'd not be *too* bad I think.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 30 May 2021 13:34:39 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: GISTSTATE is too large" } ]
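The restructuring discussed in the thread above (group everything for one column into a single struct, allocated for the index's actual number of key columns instead of nine parallel FmgrInfo[INDEX_MAX_KEYS] arrays) can be illustrated with a small compile-time size sketch. FakeFmgrInfo below is a stand-in sized at 48 bytes, matching the per-entry size the pahole dump in this thread shows for FmgrInfo, and only two of the nine support-function slots are modelled.

```c
/*
 * Size sketch of the GISTSTATE restructuring. Only the layout idea is
 * real; the member names mirror the thread, not the committed code.
 */
#include <stddef.h>

#define INDEX_MAX_KEYS 32

typedef struct FakeFmgrInfo
{
    char pad[48];               /* FmgrInfo is 48 bytes per the dump */
} FakeFmgrInfo;

/* Old layout: parallel arrays, always sized for INDEX_MAX_KEYS. */
typedef struct OldState
{
    FakeFmgrInfo consistentFn[INDEX_MAX_KEYS];
    FakeFmgrInfo unionFn[INDEX_MAX_KEYS];
    /* ...seven more such arrays in the real GISTSTATE... */
} OldState;

/* New layout: one column's support state contiguous in memory. */
typedef struct ColState
{
    FakeFmgrInfo consistentFn;
    FakeFmgrInfo unionFn;
} ColState;

typedef struct NewState
{
    int      ncolumns;
    ColState column_state[];    /* flexible array member */
} NewState;

/* Allocation size for the index's actual key-column count. */
static size_t
new_state_size(int ncolumns)
{
    return offsetof(NewState, column_state) + ncolumns * sizeof(ColState);
}
```

For a single-column index the working set shrinks from 32-entry arrays scattered across many cache lines to one contiguous chunk, and the allocation is a small fraction of the old fixed-size struct.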
[ { "msg_contents": "The comments for rewriteTargetListIU say (or said until earlier today)\n\n * 2. For an UPDATE on a trigger-updatable view, add tlist entries for any\n * unassigned-to attributes, assigning them their old values. These will\n * later get expanded to the output values of the view. (This is equivalent\n * to what the planner's expand_targetlist() will do for UPDATE on a regular\n * table, but it's more convenient to do it here while we still have easy\n * access to the view's original RT index.) This is only necessary for\n * trigger-updatable views, for which the view remains the result relation of\n * the query. For auto-updatable views we must not do this, since it might\n * add assignments to non-updatable view columns. For rule-updatable views it\n * is unnecessary extra work, since the query will be rewritten with a\n * different result relation which will be processed when we recurse via\n * RewriteQuery.\n\nI noticed that this is referencing something that, in fact,\nexpand_targetlist() doesn't do anymore, so that this is a poor\njustification for the behavior. My first thought was that we still\nneed to do it to produce the correct row contents for the INSTEAD OF\ntrigger, so I updated the comment (in 08a986966) to claim that.\n\nHowever, on closer inspection, that's nonsense. nodeModifyTable.c\npopulates the trigger \"OLD\" row from the whole-row variable that is\ngenerated for the view, and then it computes the \"NEW\" row using\nthat old row and the UPDATE tlist; there is no need there for the\nUPDATE tlist to compute all the columns. The regression tests still\npass just fine if we take out the questionable logic (cf. attached\npatch). 
Moreover, if you poke into it a little closer than the tests\ndo, you notice that the existing code is uselessly computing non-updated\ncolumns twice, in both the extra tlist item and the whole-row variable.\nAs an example, consider\n\ncreate table base (a int, b int);\ncreate view v1 as select a+1 as a1, b+2 as b2 from base;\n-- you also need an INSTEAD OF UPDATE trigger, not shown here\nexplain verbose update v1 set a1 = a1 - 44;\n\nWith HEAD you get\n\n Update on public.v1 (cost=0.00..60.85 rows=0 width=0)\n -> Seq Scan on public.base (cost=0.00..60.85 rows=2260 width=46)\n Output: ((base.a + 1) - 44), (base.b + 2), ROW((base.a + 1), (base.b + 2)), base.ctid\n\nThere's really no need to compute base.b + 2 twice, and with this\npatch we don't:\n\n Update on public.v1 (cost=0.00..55.20 rows=0 width=0)\n -> Seq Scan on public.base (cost=0.00..55.20 rows=2260 width=42)\n Output: ((base.a + 1) - 44), ROW((base.a + 1), (base.b + 2)), base.ctid\n\n\nI would think that this is a totally straightforward improvement,\nbut there's one thing in the comments for rewriteTargetListIU that\ngives me a little pause: it says\n\n * We must do items 1,2,3 before firing rewrite rules, else rewritten\n * references to NEW.foo will produce wrong or incomplete results.\n\nAs far as I can tell, though, references to NEW values still do the\nright thing. I'm not certain whether any of the existing regression\ntests really cover this point, but experimenting with the scenario shown\nin the attached SQL file says that the DO ALSO rule gets the right\nresults. 
It's possible that the expansion sequence is a bit different\nthan before, but we still end up with the right answer.\n\nSo, as far as I can tell, this is an oversight in 86dc90056 and we\nought to clean it up as attached.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 25 Apr 2021 20:40:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Does rewriteTargetListIU still need to add UPDATE tlist entries?" }, { "msg_contents": "On Mon, Apr 26, 2021 at 9:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The comments for rewriteTargetListIU say (or said until earlier today)\n>\n> * 2. For an UPDATE on a trigger-updatable view, add tlist entries for any\n> * unassigned-to attributes, assigning them their old values. These will\n> * later get expanded to the output values of the view. (This is equivalent\n> * to what the planner's expand_targetlist() will do for UPDATE on a regular\n> * table, but it's more convenient to do it here while we still have easy\n> * access to the view's original RT index.) This is only necessary for\n> * trigger-updatable views, for which the view remains the result relation of\n> * the query. For auto-updatable views we must not do this, since it might\n> * add assignments to non-updatable view columns. For rule-updatable views it\n> * is unnecessary extra work, since the query will be rewritten with a\n> * different result relation which will be processed when we recurse via\n> * RewriteQuery.\n>\n> I noticed that this is referencing something that, in fact,\n> expand_targetlist() doesn't do anymore, so that this is a poor\n> justification for the behavior. My first thought was that we still\n> need to do it to produce the correct row contents for the INSTEAD OF\n> trigger, so I updated the comment (in 08a986966) to claim that.\n\nCheck.\n\n> However, on closer inspection, that's nonsense. 
nodeModifyTable.c\n> populates the trigger \"OLD\" row from the whole-row variable that is\n> generated for the view, and then it computes the \"NEW\" row using\n> that old row and the UPDATE tlist; there is no need there for the\n> UPDATE tlist to compute all the columns. The regression tests still\n> pass just fine if we take out the questionable logic (cf. attached\n> patch). Moreover, if you poke into it a little closer than the tests\n> do, you notice that the existing code is uselessly computing non-updated\n> columns twice, in both the extra tlist item and the whole-row variable.\n>\n> As an example, consider\n>\n> create table base (a int, b int);\n> create view v1 as select a+1 as a1, b+2 as b2 from base;\n> -- you also need an INSTEAD OF UPDATE trigger, not shown here\n> explain verbose update v1 set a1 = a1 - 44;\n>\n> With HEAD you get\n>\n> Update on public.v1 (cost=0.00..60.85 rows=0 width=0)\n> -> Seq Scan on public.base (cost=0.00..60.85 rows=2260 width=46)\n> Output: ((base.a + 1) - 44), (base.b + 2), ROW((base.a + 1), (base.b + 2)), base.ctid\n>\n> There's really no need to compute base.b + 2 twice, and with this\n> patch we don't:\n>\n> Update on public.v1 (cost=0.00..55.20 rows=0 width=0)\n> -> Seq Scan on public.base (cost=0.00..55.20 rows=2260 width=42)\n> Output: ((base.a + 1) - 44), ROW((base.a + 1), (base.b + 2)), base.ctid\n\nThat makes sense to me, at least logically.\n\n> I would think that this is a totally straightforward improvement,\n> but there's one thing in the comments for rewriteTargetListIU that\n> gives me a little pause: it says\n>\n> * We must do items 1,2,3 before firing rewrite rules, else rewritten\n> * references to NEW.foo will produce wrong or incomplete results.\n>\n> As far as I can tell, though, references to NEW values still do the\n> right thing. 
I'm not certain whether any of the existing regression\n> tests really cover this point, but experimenting with the scenario shown\n> in the attached SQL file says that the DO ALSO rule gets the right\n> results. It's possible that the expansion sequence is a bit different\n> than before, but we still end up with the right answer.\n\nI also checked what the rewriter and the planner do for the following\nDO ALSO insert:\n\ncreate rule logit as on update to v1 do also\ninsert into log values(old.a1, new.a1, old.b2, new.b2);\n\nand can see that the insert ends up with the right targetlist\nirrespective of whether or not rewriteTargetListIU() adds an item for\nNEW.b2. So, I attached a debugger to the update query in your shared\nscript and focused on how ReplaceVarsFromTargetList(), running on the\ninsert query added by the rule, handles the item for NEW.b2 no longer\nbeing added to the update's targetlist after your patch. Turns out\nthe result (the insert's targetlist) is the same even if the path\ntaken in ReplaceVarsFromTargetList_callback() is different after your\npatch.\n\nBefore:\n\nexplain verbose update v1 set a1 = a1-44;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------\n Insert on public.log (cost=0.00..60.85 rows=0 width=0)\n -> Seq Scan on public.base (cost=0.00..60.85 rows=2260 width=16)\n Output: (base.a + 1), ((base.a + 1) - 44), (base.b + 2), (base.b + 2)\n\n Update on public.v1 (cost=0.00..60.85 rows=0 width=0)\n -> Seq Scan on public.base (cost=0.00..60.85 rows=2260 width=46)\n Output: ((base.a + 1) - 44), (base.b + 2), ROW((base.a + 1),\n(base.b + 2)), base.ctid\n(7 rows)\n\nAfter:\n\nexplain verbose update v1 set a1 = a1-44;\n QUERY PLAN\n---------------------------------------------------------------------------------\n Insert on public.log (cost=0.00..60.85 rows=0 width=0)\n -> Seq Scan on public.base (cost=0.00..60.85 rows=2260 width=16)\n Output: (base.a + 1), ((base.a + 1) - 44), 
(base.b + 2), (base.b + 2)\n\n Update on public.v1 (cost=0.00..55.20 rows=0 width=0)\n -> Seq Scan on public.base (cost=0.00..55.20 rows=2260 width=42)\n Output: ((base.a + 1) - 44), ROW((base.a + 1), (base.b + 2)), base.ctid\n(7 rows)\n\nI didn't however study closely why REPLACEVARS_CHANGE_VARNO does the\ncorrect thing, so am not sure if there might be cases that would be\nbroken.\n\n> So, as far as I can tell, this is an oversight in 86dc90056 and we\n> ought to clean it up as attached.\n\nThanks for noticing this and the patch. If you are confident that\nREPLACEVARS_CHANGE_VARNO covers all imaginable cases, I suppose it\nmakes sense to apply it.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 26 Apr 2021 12:54:23 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Does rewriteTargetListIU still need to add UPDATE tlist entries?" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Mon, Apr 26, 2021 at 9:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I would think that this is a totally straightforward improvement,\n>> but there's one thing in the comments for rewriteTargetListIU that\n>> gives me a little pause: it says\n>> \n>> * We must do items 1,2,3 before firing rewrite rules, else rewritten\n>> * references to NEW.foo will produce wrong or incomplete results.\n>> \n>> As far as I can tell, though, references to NEW values still do the\n>> right thing. I'm not certain whether any of the existing regression\n>> tests really cover this point, but experimenting with the scenario shown\n>> in the attached SQL file says that the DO ALSO rule gets the right\n>> results. 
It's possible that the expansion sequence is a bit different\n>> than before, but we still end up with the right answer.\n\n> I also checked what the rewriter and the planner do for the following\n> DO ALSO insert:\n> create rule logit as on update to v1 do also\n> insert into log values(old.a1, new.a1, old.b2, new.b2);\n> and can see that the insert ends up with the right targetlist\n> irrespective of whether or not rewriteTargetListIU() adds an item for\n> NEW.b2.\n\nThanks for looking at that. On reflection I think this must be so,\nbecause those rewriter mechanisms were designed long before we had\ntrigger-updatable views, and rewriteTargetListIU has never added\ntlist items like this for any other sort of view. So the ability\nto insert the original view output column has necessarily been there\nfrom the beginning. This is just getting rid of a weird implementation\ndifference between trigger-updatable views and other views.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 26 Apr 2021 10:08:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Does rewriteTargetListIU still need to add UPDATE tlist entries?" }, { "msg_contents": "On Mon, 26 Apr 2021 at 15:09, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Thanks for looking at that. On reflection I think this must be so,\n> because those rewriter mechanisms were designed long before we had\n> trigger-updatable views, and rewriteTargetListIU has never added\n> tlist items like this for any other sort of view. So the ability\n> to insert the original view output column has necessarily been there\n> from the beginning. 
This is just getting rid of a weird implementation\n> difference between trigger-updatable views and other views.\n>\n\nFWIW, I had a look at this too and came to much the same conclusion,\nso I think this is a safe change that makes the code a little neater\nand more efficient.\n\nRegards,\nDean\n\n\n", "msg_date": "Mon, 26 Apr 2021 15:27:59 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Does rewriteTargetListIU still need to add UPDATE tlist entries?" }, { "msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> On Mon, 26 Apr 2021 at 15:09, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Thanks for looking at that. On reflection I think this must be so,\n>> because those rewriter mechanisms were designed long before we had\n>> trigger-updatable views, and rewriteTargetListIU has never added\n>> tlist items like this for any other sort of view. So the ability\n>> to insert the original view output column has necessarily been there\n>> from the beginning. This is just getting rid of a weird implementation\n>> difference between trigger-updatable views and other views.\n\n> FWIW, I had a look at this too and came to much the same conclusion,\n> so I think this is a safe change that makes the code a little neater\n> and more efficient.\n\nAgain, thanks for looking!\n\nI checked into the commit history (how'd we ever survive without \"git\nblame\"?) and found that my argument above is actually wrong in detail.\nBefore cab5dc5da of 2013-10-18, rewriteTargetListIU expanded non-updated\ncolumns for all views not only trigger-updatable ones. 
However, that\nbehavior itself goes back only to 2ec993a7c of 2010-10-10, which added\ntriggers on views; before that there was indeed no such expansion.\nOf course the view rewrite mechanisms are ten or so years older than\nthat, so the conclusion that they weren't designed to need this still\nstands.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 26 Apr 2021 10:55:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Does rewriteTargetListIU still need to add UPDATE tlist entries?" }, { "msg_contents": "On Mon, 26 Apr 2021 at 15:55, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I checked into the commit history (how'd we ever survive without \"git\n> blame\"?) and found that my argument above is actually wrong in detail.\n> Before cab5dc5da of 2013-10-18, rewriteTargetListIU expanded non-updated\n> columns for all views not only trigger-updatable ones. However, that\n> behavior itself goes back only to 2ec993a7c of 2010-10-10, which added\n> triggers on views; before that there was indeed no such expansion.\n\nAh, that makes sense. Before cab5dc5da, expanding non-updated columns\nof auto-updatable views was safe because until then a view was only\nauto-updatable if all its columns were. It was still unnecessary work\nthough, and with 20/20 hindsight, when triggers on views were first\nadded in 2ec993a7c, it probably should have only expanded the\ntargetlist for trigger-updatable views.\n\n> Of course the view rewrite mechanisms are ten or so years older than\n> that, so the conclusion that they weren't designed to need this still\n> stands.\n\nYeah, I think that conclusion is right. The trickiest part I found was\ndeciding whether any product queries from conditional rules would do\nthe right thing if the main trigger-updatable query no longer expands\nits targetlist.
But I think that has to be OK, because even before\ntrigger-updatable views were added, it was possible to have product\nqueries from conditional rules together with an unconditional\ndo-nothing rule, so the product queries don't rely on the expanded\ntargetlist, and never have.\n\nRegards,\nDean\n\n\n", "msg_date": "Mon, 26 Apr 2021 17:42:28 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Does rewriteTargetListIU still need to add UPDATE tlist entries?" } ]
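The scenario debated in the thread above — a trigger-updatable view combined with a DO ALSO rule whose actions reference NEW columns that the original UPDATE never assigns — can be sketched as follows. This is an illustrative reconstruction, not the SQL file Tom attached: the view, rule, and column names (v1, logit, a1, b2) follow Amit's example earlier in the thread, while the base table, log table, and trigger function bodies are assumptions made here for completeness.

```sql
-- Base table and a view over it; the view is made updatable via an
-- INSTEAD OF trigger rather than through auto-updatability.
CREATE TABLE base (a int PRIMARY KEY, b int);
CREATE VIEW v1 AS SELECT a AS a1, b AS b2 FROM base;

CREATE FUNCTION v1_upd() RETURNS trigger AS $$
BEGIN
  UPDATE base SET a = NEW.a1, b = NEW.b2 WHERE a = OLD.a1;
  RETURN NEW;
END $$ LANGUAGE plpgsql;

CREATE TRIGGER v1_upd_trig INSTEAD OF UPDATE ON v1
  FOR EACH ROW EXECUTE FUNCTION v1_upd();

-- A DO ALSO rule referencing both OLD and NEW, including NEW.b2,
-- a column the triggering UPDATE below does not assign.
CREATE TABLE log (old_a int, new_a int, old_b int, new_b int);
CREATE RULE logit AS ON UPDATE TO v1 DO ALSO
  INSERT INTO log VALUES (OLD.a1, NEW.a1, OLD.b2, NEW.b2);

-- NEW.b2 must still resolve to the view's current b2 value even when
-- rewriteTargetListIU no longer adds a tlist entry for the column.
UPDATE v1 SET a1 = a1 + 1;
```

With the change discussed above, the rule's INSERT should produce the same rows as before, since the rewriter resolves the omitted column from the view's own output column rather than from an artificially expanded targetlist.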
[ { "msg_contents": "Hi,\n\nPSA patch to fix a misnamed function in a comment.\n\ntypo: \"DecodePreare\" --> \"DecodePrepare\"\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Mon, 26 Apr 2021 12:29:16 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "comment typo - misnamed function" }, { "msg_contents": "On Mon, Apr 26, 2021 at 7:59 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> PSA patch to fix a misnamed function in a comment.\n>\n> typo: \"DecodePreare\" --> \"DecodePrepare\"\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 26 Apr 2021 10:20:45 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: comment typo - misnamed function" } ]
[ { "msg_contents": "Hi all,\n\n9afffcb has added the concept of authenticated identity to the\ninformation provided in log_connections for audit purposes, with this\ndata stored in each backend's port. One extra thing that can be\nreally useful for monitoring is the capability to track this\ninformation directly in pg_stat_activity.\n\nPlease find attached a patch to do that, with the following choices\nmade:\n- Like query strings, authenticated IDs could be rather long, so we\nneed a GUC to control the maximum size allocated for these in shared\nmemory. The attached uses 128 bytes by default, that should be enough\nin most cases even for DNs, LDAP or krb5.\n- Multi-byte strings need to be truncated appropriately. As a matter\nof consistency with the query string code, I have made things so as\nthe truncation is done each time a string is requested, with\nPgBackendStatus storing the raw information truncated depending on the\nmaximum size allowed at the GUC level.\n- Some tests are added within the SSL and LDAP code paths. We could\nadd more of that within the authentication and kerberos tests but that\ndid not strike me as mandatory either as the backend logs are checked\neverywhere already.\n- The new field has been added at the end of pg_stat_end_activity()\nmainly as a matter of readability. 
I'd rather move that just after\nthe application_name, now only pg_stat_activity does that.\n\nI am adding that to the next CF.\n\nThanks,\n--\nMichael", "msg_date": "Mon, 26 Apr 2021 11:34:16 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Addition of authenticated ID to pg_stat_activity" }, { "msg_contents": "On Mon, Apr 26, 2021 at 11:34:16AM +0900, Michael Paquier wrote:\n> +++ b/doc/src/sgml/config.sgml\n> @@ -7596,6 +7596,24 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;\n> </listitem>\n> </varlistentry>\n> \n> + <varlistentry id=\"guc-track-activity-authn-size\" xreflabel=\"track_activity_authn_size\">\n> + <term><varname>track_activity_authn_size</varname> (<type>integer</type>)\n> + <indexterm>\n> + <primary><varname>track_activity_authn_size</varname> configuration parameter</primary>\n> + </indexterm>\n> + </term>\n> + <listitem>\n> + <para>\n> + Specifies the amount of memory reserved to store the text of the\n> + currently executing command for each active session, for the\n\nThat part looks to be a copy+paste error.\n\n> + <structname>pg_stat_activity</structname>.<structfield>authenticated_id</structfield> field.\n> + If this value is specified without units, it is taken as bytes.\n> + The default value is 128 bytes.\n> + This parameter can only be set at server start.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n\nI think many/most things in log/CSV should also go in PSA, and vice versa.\n\nIt seems like there should be a comment about this - in both places - to avoid\nforgetting it in the future.\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 25 Apr 2021 22:14:43 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Addition of authenticated ID to pg_stat_activity" }, { "msg_contents": "On Sun, Apr 25, 2021 at 10:14:43PM -0500, Justin Pryzby wrote:\n> That part looks to be a copy+paste error.\n\nSorry about that. 
I have fixed that on my own branch.\n\n>> + <structname>pg_stat_activity</structname>.<structfield>authenticated_id</structfield> field.\n>> + If this value is specified without units, it is taken as bytes.\n>> + The default value is 128 bytes.\n>> + This parameter can only be set at server start.\n>> + </para>\n>> + </listitem>\n>> + </varlistentry>\n> \n> I think many/most things in log/CSV should also go in PSA, and vice versa.\n>\n> It seems like there should be a comment about this - in both places - to avoid\n> forgetting it in the future.\n\nI am not sure what you mean here, neither do I see in what this is\nrelated to what is proposed on this thread.\n--\nMichael", "msg_date": "Mon, 26 Apr 2021 18:16:24 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Addition of authenticated ID to pg_stat_activity" }, { "msg_contents": "Hi,\n\nOn 2021-04-26 11:34:16 +0900, Michael Paquier wrote:\n> 9afffcb has added the concept of authenticated identity to the\n> information provided in log_connections for audit purposes, with this\n> data stored in each backend's port. One extra thing that can be\n> really useful for monitoring is the capability to track this\n> information directly in pg_stat_activity.\n\nI'm getting a bit worried about the incremental increase in\npg_stat_activity width - it's probably by far the view that's most\nviewed interactively. I think we need to be careful not to add too niche\nthings to it. 
This is especially true for columns that may be wider.\n\nIt'd be bad for discoverability, but perhaps something like this, that's\nnot that likely to be used interactively, would be better done as a\nseparate function that would need to be used explicitly?\n\n\nA select * from pg_stat_activity on a plain installation, connecting\nover unix socket, with nothing running, is 411 chars wide for me...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 26 Apr 2021 12:18:11 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Addition of authenticated ID to pg_stat_activity" }, { "msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2021-04-26 11:34:16 +0900, Michael Paquier wrote:\n> > 9afffcb has added the concept of authenticated identity to the\n> > information provided in log_connections for audit purposes, with this\n> > data stored in each backend's port. One extra thing that can be\n> > really useful for monitoring is the capability to track this\n> > information directly in pg_stat_activity.\n> \n> I'm getting a bit worried about the incremental increase in\n> pg_stat_activity width - it's probably by far the view that's most\n> viewed interactively. I think we need to be careful not to add too niche\n> things to it. This is especially true for columns that may be wider.\n> \n> It'd be bad for discoverability, but perhaps something like this, that's\n> not that likely to be used interactively, would be better done as a\n> separate function that would need to be used explicitly?\n\nI mean.. 
we already have separate functions and views for this, though\nthey're auth-method-specific currently, but also provide more details,\nsince it isn't actually a \"one size fits all\" kind of thing like this\nentire approach is imagining it to be.\n\nThanks,\n\nStephen", "msg_date": "Mon, 26 Apr 2021 15:21:46 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Addition of authenticated ID to pg_stat_activity" }, { "msg_contents": "On Mon, Apr 26, 2021 at 03:21:46PM -0400, Stephen Frost wrote:\n> * Andres Freund (andres@anarazel.de) wrote:\n>> I'm getting a bit worried about the incremental increase in\n>> pg_stat_activity width - it's probably by far the view that's most\n>> viewed interactively. I think we need to be careful not to add too niche\n>> things to it. This is especially true for columns that may be wider.\n>> \n>> It'd be bad for discoverability, but perhaps something like this, that's\n>> not that likely to be used interactively, would be better done as a\n>> separate function that would need to be used explicitly?\n> \n> I mean.. we already have separate functions and views for this, though\n> they're auth-method-specific currently, but also provide more details,\n> since it isn't actually a \"one size fits all\" kind of thing like this\n> entire approach is imagining it to be.\n\nReferring to pg_stat_ssl and pg_stat_gssapi here, right? Yes, that\nwould be very limited as this leads to no visibility for LDAP, all\npassword-based authentications and more.\n\nI am wondering if we should take this as an occasion to move some data\nout of pg_stat_activity into a separate view, dedicated to the data\nrelated to the connection that remains set to the same value for the\nduration of a backend's life, as of the following set:\n- the backend PID\n- client_addr\n- client_hostname\n- client_port\n- authenticated ID\n- application_name?
(well, this one could change on reload, so I am\nlying).\n\nIt would be tempting to move the database name and the username but\nthese are popular fields with monitoring. Maybe we could name that\npg_stat_connection_status, pg_stat_auth_status or just\npg_stat_connection?\n--\nMichael", "msg_date": "Tue, 27 Apr 2021 09:59:18 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Addition of authenticated ID to pg_stat_activity" }, { "msg_contents": "On Tue, Apr 27, 2021 at 09:59:18AM +0900, Michael Paquier wrote:\n> \n> I am wondering if we should take this as an occasion to move some data\n> out of pg_stat_activity into a separate view, dedicated to the data\n> related to the connection that remains set to the same value for the\n> duration of a backend's life, as of the following set:\n> - the backend PID\n\n-1. It's already annoying enough to have to type \"WHERE pid !=\npg_backend_pid()\" to exclude my own backend, and I usually need it quite often.\nUnless we add some new view which integrates that, something like\npg_stat_activity_except_me with a better name. I also don't see how we could\njoin a new dedicated view with the old one without that information.\n\n> - application_name? (well, this one could change on reload, so I am\n> lying).\n\nNo, it can change at any time. And the fact that it's not transactional makes\nit quite convenient for poor man's progress reporting. For instance in powa I\nuse that to report what the bgworker is currently working on, and this has\nalready proven to be useful.\n\n\n", "msg_date": "Tue, 27 Apr 2021 09:26:11 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Addition of authenticated ID to pg_stat_activity" }, { "msg_contents": "On Tue, Apr 27, 2021 at 09:26:11AM +0800, Julien Rouhaud wrote:\n> -1.
It's already annoying enough to have to type \"WHERE pid !=\n> pg_backend_pid()\" to exclude my own backend, and I usually need it quite often.\n> Unless we add some new view which integrate that, something like\n> pg_stat_activity_except_me with a better name. I also don't see how we could\n> join a new dedicated view with the old one without that information.\n\nErr, sorry for the confusion. What I meant here is to also keep the\nPID in pg_stat_activity, but also add it to this new view to be able\nto join things across the board.\n--\nMichael", "msg_date": "Tue, 27 Apr 2021 10:54:02 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Addition of authenticated ID to pg_stat_activity" }, { "msg_contents": "Greetings,\n\n* Michael Paquier (michael@paquier.xyz) wrote:\n> On Mon, Apr 26, 2021 at 03:21:46PM -0400, Stephen Frost wrote:\n> > * Andres Freund (andres@anarazel.de) wrote:\n> >> I'm getting a bit worried about the incremental increase in\n> >> pg_stat_activity width - it's probably by far the view that's most\n> >> viewed interactively. I think we need to be careful not to add too niche\n> >> things to it. This is especially true for columns that may be wider.\n> >> \n> >> It'd be bad for discoverability, but perhaps something like this, that's\n> >> not that likely to be used interactively, would be better done as a\n> >> separate function that would need to be used explicitly?\n> > \n> > I mean.. we already have separate functions and views for this, though\n> > they're auth-method-specific currently, but also provide more details,\n> > since it isn't actually a \"one size fits all\" kind of thing like this\n> > entire approach is imagining it to be.\n> \n> Referring to pg_stat_ssl and pg_stat_gssapi here, right? Yes, that\n> would be very limited as this leads to no visibility for LDAP, all\n> password-based authentications and more.\n\nYes, of course. 
The point being made was that we could do the same for\nthe other auth methods rather than adding something to pg_stat_activity.\n\n> I am wondering if we should take this as an occasion to move some data\n> out of pg_stat_activity into a separate biew, dedicated to the data\n> related to the connection that remains set to the same value for the\n> duration of a backend's life, as of the following set:\n> - the backend PID\n> - client_addr\n> - client_hostname\n> - client_port\n> - authenticated ID\n> - application_name? (well, this one could change on reload, so I am\n> lying).\n\napplication_name certainly changes, as pointed out elsewhere.\n\n> It would be tempting to move the database name and the username but\n> these are popular fields with monitoring. Maybe we could name that\n> pg_stat_connection_status, pg_stat_auth_status or just\n> pg_stat_connection?\n\nI don't know that there's really any of the existing fields that\naren't \"popular fields with monitoring\".. The issue that Andres brought\nup wasn't about monitoring though- it was about users looking\ninteractively. Monitoring systems can adjust their queries for the new\nmajor version to do whatever joins, et al, they need and that's a\nonce-per-major-version to do. On the other hand, people doing:\n\ntable pg_stat_activity;\n\nWould like to get the info they really want out of that and not anything\nelse. If we're going to adjust the fields returned from that then\nthat's really the lens we should use.\n\nSo, what fields are people really looking at when querying\npg_stat_activity interactively? User, database, pid, last query,\ntransaction start, query start, state, wait event info, maybe backend\nxmin/xid? 
I doubt most people looking at pg_stat_activity interactively\nactually care about the non-user backends (autovacuum, et al).\n\nThanks,\n\nStephen", "msg_date": "Tue, 27 Apr 2021 12:40:29 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Addition of authenticated ID to pg_stat_activity" }, { "msg_contents": "On Tue, Apr 27, 2021 at 09:59:18AM +0900, Michael Paquier wrote:\n> On Mon, Apr 26, 2021 at 03:21:46PM -0400, Stephen Frost wrote:\n> > * Andres Freund (andres@anarazel.de) wrote:\n> >> I'm getting a bit worried about the incremental increase in\n> >> pg_stat_activity width - it's probably by far the view that's most\n> >> viewed interactively. I think we need to be careful not to add too niche\n> >> things to it. This is especially true for columns that may be wider.\n> >> \n> >> It'd be bad for discoverability, but perhaps something like this, that's\n> >> not that likely to be used interactively, would be better done as a\n> >> separate function that would need to be used explicitly?\n> > \n> > I mean.. we already have separate functions and views for this, though\n> > they're auth-method-specific currently, but also provide more details,\n> > since it isn't actually a \"one size fits all\" kind of thing like this\n> > entire approach is imagining it to be.\n> \n> I am wondering if we should take this as an occasion to move some data\n> out of pg_stat_activity into a separate biew, dedicated to the data\n> related to the connection that remains set to the same value for the\n> duration of a backend's life, as of the following set:\n> - the backend PID\n> - client_addr\n> - client_hostname\n> - client_port\n> - authenticated ID\n> - application_name? (well, this one could change on reload, so I am\n> lying).\n\n+backend type\n+leader_PID\n\n> It would be tempting to move the database name and the username but\n> these are popular fields with monitoring. 
Maybe we could name that\n> pg_stat_connection_status, pg_stat_auth_status or just\n> pg_stat_connection?\n\nMaybe - there could also be a trivial view which JOINs pg_stat_activity and\npg_stat_connection ON (pid).\n\nTechnically I think it could also move backend_start/backend_xmin, but it'd be\nodd to move them if the other timestamp/xid columns stayed in pg_stat_activity.\n\nThere's no reason that pg_stat_connection would *have* to be \"static\" per\nconnction, right ? That's just how you're defining what would be included.\n\nStephen wrote:\n> Would like to get the info they really want out of that and not anything\n> else. If we're going to adjust the fields returned from that then\n> that's really the lens we should use.\n> \n> So, what fields are people really looking at when querying\n> pg_stat_activity interactively? User, database, pid, last query,\n> transaction start, query start, state, wait event info, maybe backend\n> xmin/xid? I doubt most people looking at pg_stat_activity interactively\n> actually care about the non-user backends (autovacuum, et al).\n\nI think the narrow/userfacing view would exclude only the OID/XID fields:\n\n datid | oid | | |\n usesysid | oid | | |\n backend_xid | xid | | |\n backend_xmin | xid | | |\n\nI think interactive users *would* care about other backend types - they're\nfrequently wondering \"what's going on?\"\n\nTBH, query text is often so long that I have to write left(query,33), and then\nthe idea of a \"userfacing\" variant loses its appeal, since it's necessary to\nenumerate columns anyway.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 27 Apr 2021 12:07:03 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Addition of authenticated ID to pg_stat_activity" }, { "msg_contents": "Hi,\n\nOn 2021-04-27 12:40:29 -0400, Stephen Frost wrote:\n> So, what fields are people really looking at when querying\n> pg_stat_activity interactively? 
User, database, pid, last query,\n> transaction start, query start, state, wait event info, maybe backend\n> xmin/xid? I doubt most people looking at pg_stat_activity interactively\n> actually care about the non-user backends (autovacuum, et al).\n\nNot representative, but I personally am about as often interested in one\nof the non-connection processes as the connection\nones. E.g. investigating what is autovacuum's bottleneck, are\ncheckpointer / wal writer / bgwriter io bound or keeping up, etc.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 27 Apr 2021 11:24:57 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Addition of authenticated ID to pg_stat_activity" }, { "msg_contents": "On Tue, Apr 27, 2021 at 8:25 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-04-27 12:40:29 -0400, Stephen Frost wrote:\n> > So, what fields are people really looking at when querying\n> > pg_stat_activity interactively? User, database, pid, last query,\n> > transaction start, query start, state, wait event info, maybe backend\n> > xmin/xid? I doubt most people looking at pg_stat_activity interactively\n> > actually care about the non-user backends (autovacuum, et al).\n>\n> Not representative, but I personally am about as often interested in one\n> of the non-connection processes as the connection\n> ones. E.g. investigating what is autovacuum's bottleneck, are\n> checkpointer / wal writer / bgwriter io bound or keeping up, etc.\n\nI definitely use it all the time to monitor autovacuum all the time.\nThe others as well regularly, but autovacuum continuously. 
I also see\na lot of people doing things like \"from pg_stat_activity where query\nlike '%mytablename%'\" where they'd want both any regular queries and\nany autovacuums currently processing the table.\n\nI'd say client address is also pretty common to identify which set of\napp servers connections are coming in from -- but client port and\nclient hostname are a lot less interesting. But it'd be kind of weird\nto split those out.\n\nFor *interactive use* I'd find pretty much all other columns\ninteresting and commonly used. Probably not that interested in the\noids of the database and user, but again they are the cheap ones. We\ncould get rid of the joints if we only showed the oids, but in\ninteractive use it's really the names that are interesting. But if\nwe're just trying to save column count, I'd say get rid of datid and\nusesysid.\n\nI'd hold everything else as interesting.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Thu, 29 Apr 2021 16:56:42 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Addition of authenticated ID to pg_stat_activity" }, { "msg_contents": "On Thu, Apr 29, 2021 at 04:56:42PM +0200, Magnus Hagander wrote:\n> I definitely use it all the time to monitor autovacuum all the time.\n> The others as well regularly, but autovacuum continuously. I also see\n> a lot of people doing things like \"from pg_stat_activity where query\n> like '%mytablename%'\" where they'd want both any regular queries and\n> any autovacuums currently processing the table.\n\nWhen it comes to development work, I also look at things different\nthan backend connections, checkpointer and WAL writer included.\n\n> I'd say client address is also pretty common to identify which set of\n> app servers connections are coming in from -- but client port and\n> client hostname are a lot less interesting. 
But it'd be kind of weird\n> to split those out.\n\nYes, I agree that it would be confusing to split the client_* fields\nacross multiple views.\n\n> For *interactive use* I'd find pretty much all other columns\n> interesting and commonly used. Probably not that interested in the\n> oids of the database and user, but again they are the cheap ones. We\n> could get rid of the joints if we only showed the oids, but in\n> interactive use it's really the names that are interesting. But if\n> we're just trying to save column count, I'd say get rid of datid and\n> usesysid.\n> \n> I'd hold everything else as interesting.\n\nYes, you have an argument here about the removal of usesysid and\ndatid. Now I find joins involving OIDs to be much more natural than\nthe object names, because that's the base of what we use in the\ncatalogs.\n\nNot sure if we would be able to agree on something here, but the\nbarrier to what a session and a connection hold is thin when it comes\nto roles and application_name. 
Thinking more about that, I would be\nreally tempted to get to do a more straight split with data that's\nassociated to a session, to a transaction and to a connection, say:\n1) pg_stat_session, data that may change.\n- PID\n- leader PID\n- the role name\n- role ID\n- application_name\n- wait_event_type\n- wait_event\n2) pg_stat_connection, static data associated to a connection.\n- PID\n- database name\n- database OID\n- client_addr\n- client_hostname\n- client_port\n- backend_start\n- authn ID\n- backend_type\n3) pg_stat_transaction, or pg_stat_activity, for the transactional\nactivity.\n- PID\n- xact_start\n- query_start\n- backend_xid\n- state_change\n- query string\n- query ID\n- state\n\nOr I could just drop a new function that fetches the authn ID for a\ngiven PID, even if this makes things potentially less consistent when\nit comes to the lookup of PgBackendStatus, guarantee given now by\npg_stat_get_activity().\n--\nMichael", "msg_date": "Mon, 17 May 2021 13:35:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Addition of authenticated ID to pg_stat_activity" }, { "msg_contents": "On Mon, May 17, 2021 at 6:35 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Apr 29, 2021 at 04:56:42PM +0200, Magnus Hagander wrote:\n> > I definitely use it all the time to monitor autovacuum all the time.\n> > The others as well regularly, but autovacuum continuously. I also see\n> > a lot of people doing things like \"from pg_stat_activity where query\n> > like '%mytablename%'\" where they'd want both any regular queries and\n> > any autovacuums currently processing the table.\n>\n> When it comes to development work, I also look at things different\n> than backend connections, checkpointer and WAL writer included.\n\nWhile I think we should optimize these primarily for users and not\ndevelopers, I definitely do those things as well. 
In particular wait\nevents for the background processes.\n\n\n> > I'd say client address is also pretty common to identify which set of\n> > app servers connections are coming in from -- but client port and\n> > client hostname are a lot less interesting. But it'd be kind of weird\n> > to split those out.\n>\n> Yes, I agree that it would be confusing to split the client_* fields\n> across multiple views.\n>\n> > For *interactive use* I'd find pretty much all other columns\n> > interesting and commonly used. Probably not that interested in the\n> > oids of the database and user, but again they are the cheap ones. We\n> > could get rid of the joints if we only showed the oids, but in\n> > interactive use it's really the names that are interesting. But if\n> > we're just trying to save column count, I'd say get rid of datid and\n> > usesysid.\n> >\n> > I'd hold everything else as interesting.\n>\n> Yes, you have an argument here about the removal of usesysid and\n> datid. Now I find joins involving OIDs to be much more natural than\n> the object names, because that's the base of what we use in the\n> catalogs.\n\nAgreed. And I'm not sure the actual gain is that big if we can just\nremove oid columns...\n\n\n> Not sure if we would be able to agree on something here, but the\n> barrier to what a session and a connection hold is thin when it comes\n> to roles and application_name. 
> Thinking more about that, I would be\n> really tempted to get to do a more straight split with data that's\n> associated to a session, to a transaction and to a connection, say:\n> 1) pg_stat_session, data that may change.\n> - PID\n> - leader PID\n> - the role name\n> - role ID\n> - application_name\n> - wait_event_type\n> - wait_event\n> 2) pg_stat_connection, static data associated to a connection.\n> - PID\n> - database name\n> - database OID\n> - client_addr\n> - client_hostname\n> - client_port\n> - backend_start\n> - authn ID\n> - backend_type\n> 3) pg_stat_transaction, or pg_stat_activity, for the transactional\n> activity.\n> - PID\n> - xact_start\n> - query_start\n> - backend_xid\n> - state_change\n> - query string\n> - query ID\n> - state\n\nThis seems extremely user-unfriendly to me.\n\nI mean. Timestamps are now split out between different views so you\ncan't track the process without it. And surely wait_event info is\n*extremely* related to things like what query is running and what\nstate the session is in. And putting backend_type off in a separate\nview means most queries are going to have to join that in anyway. Or\ninclude it in all views. And if we're forcing the majority of queries\nto join multiple views, what have we actually gained?\n\nBased on your list above, I'd definitely want at least (1) and (2) to\nbe in the same one, but they'd have to also gain at least the database\noid/name and backend_type, and maybe also backend_start.\n\nSo basically, it would be moving out client_*, and authn_id.
If we're\ndoing that then as you say maybe pg_stat_connection is a good name and\ncould then *also* gain the information that's currently in the ssl and\ngss views for a net simplification.\n\ntld;dr; I think we have to be really careful here or the cure is going\nto be way worse than the disease.\n\n> Or I could just drop a new function that fetches the authn ID for a\n> given PID, even if this makes things potentially less consistent when\n> it comes to the lookup of PgBackendStatus, guarantee given now by\n> pg_stat_get_activity().\n\nWell, the authnid will never change so I'm not sure the consistency\npart is a big problem? Or maybe I'm misunderstanding the risk you're\nreferring to?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Mon, 17 May 2021 10:28:49 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Addition of authenticated ID to pg_stat_activity" }, { "msg_contents": "On Mon, May 17, 2021 at 10:28:49AM +0200, Magnus Hagander wrote:\n> On Mon, May 17, 2021 at 6:35 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> Not sure if we would be able to agree on something here, but the\n>> barrier to what a session and a connection hold is thin when it comes\n>> to roles and application_name. 
Thinking more about that, I would be\n>> really tempted to get to do a more straight split with data that's\n>> associated to a session, to a transaction and to a connection, say:\n>> 1) pg_stat_session, data that may change.\n>> - PID\n>> - leader PID\n>> - the role name\n>> - role ID\n>> - application_name\n>> - wait_event_type\n>> - wait_event\n>> 2) pg_stat_connection, static data associated to a connection.\n>> - PID\n>> - database name\n>> - database OID\n>> - client_addr\n>> - client_hostname\n>> - client_port\n>> - backend_start\n>> - authn ID\n>> - backend_type\n>> 3) pg_stat_transaction, or pg_stat_activity, for the transactional\n>> activity.\n>> - PID\n>> - xact_start\n>> - query_start\n>> - backend_xid\n>> - state_change\n>> - query string\n>> - query ID\n>> - state\n> \n> This seems extremely user-unfriendly to me.\n> \n> I mean, timestamps are now split out between different views so you\n> can't track the process without it. And surely wait_event info is\n> *extremely* related to things like what query is running and what\n> state the session is in. And putting backend_type off in a separate\n> view means most queries are going to have to join that in anyway. Or\n> include it in all views. And if we're forcing the majority of queries\n> to join multiple views, what have we actually gained?\n> \n> Based on your list above, I'd definitely want at least (1) and (2) to\n> be in the same one, but they'd have to also gain at least the database\n> oid/name and backend_type, and maybe also backend_start.\n\nOkay.\n\n> So basically, it would be moving out client_*, and authn_id.\n\nSo that would mean the addition of one new catalog view, called\npg_stat_connection, with the following fields:\n- PID\n- all three client_*\n- authn ID\nI can live with this split.
Thoughts from others?\n\n> If we're\ndoing that then as you say maybe pg_stat_connection is a good name and\ncould then *also* gain the information that's currently in the ssl and\ngss views for a net simplification.\n\nI am less enthusiastic about this addition. SSL and GSSAPI have no\nfields in common, so that would bloat the view for connection data\nwith mostly NULL fields most of the time.\n\n> tl;dr: I think we have to be really careful here or the cure is going\n> to be way worse than the disease.\n\nAgreed.\n\n>> Or I could just drop a new function that fetches the authn ID for a\n>> given PID, even if this makes things potentially less consistent when\n>> it comes to the lookup of PgBackendStatus, guarantee given now by\n>> pg_stat_get_activity().\n>\n> Well, the authn ID will never change so I'm not sure the consistency\n> part is a big problem? Or maybe I'm misunderstanding the risk you're\n> referring to?\n\nI just mean to keep the consistency we have now with one single call\nof pg_stat_get_activity() for each catalog view, so as we still fetch\nonce a consistent copy of all PgBackendStatus entries in this code\npath.\n--\nMichael", "msg_date": "Tue, 18 May 2021 11:20:49 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Addition of authenticated ID to pg_stat_activity" }, { "msg_contents": "On Tue, May 18, 2021 at 11:20:49AM +0900, Michael Paquier wrote:\n> So that would mean the addition of one new catalog view, called\n> pg_stat_connection, with the following fields:\n> - PID\n> - all three client_*\n> - authn ID\n> I can live with this split.
Thoughts from others?\n\nJust to make the discussion move on, attached is an updated version\ndoing that.\n--\nMichael", "msg_date": "Fri, 21 May 2021 13:28:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Addition of authenticated ID to pg_stat_activity" }, { "msg_contents": "Hi Michael,\n\n> Just to make the discussion move on, attached is an updated version\n> doing that.\n\nThe code seems OK, but I have mixed feelings about the way that the\nVIEW currently works.\n\nHere is what I get when a single user is connected via a UNIX socket:\n\n43204 (master) =# select * from pg_stat_connection;\n pid | authenticated_id | client_addr | client_hostname | client_port\n-------+------------------+-------------+-----------------+-------------\n 25806 | | | |\n 25808 | | | |\n 43204 | | | | -1\n 25804 | | | |\n 25803 | | | |\n 25805 | | | |\n(6 rows)\n\nI bet we could be more user-friendly than this. To begin with, the\ndocumentation says:\n\n+ <para>\n+ The <structname>pg_stat_connection</structname> view will have one row\n+ per server process, showing information related to\n+ the current connection of that process.\n+ </para>\n\n[...]\n\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>client_addr</structfield> <type>inet</type>\n+ </para>\n+ <para>\n+ IP address of the client connected to this backend.\n+ If this field is null, it indicates either that the client is\n+ connected via a Unix socket on the server machine or that this is an\n+ internal process such as autovacuum.\n+ </para></entry>\n+ </row>\n\nAny reason why we shouldn't simply exclude internal processes from the\nview? Do they have a connection that the VIEW could show?\n\nSecondly, maybe instead of magic constants like -1, we could use an\nadditional text column, e.g. connection_type: \"unix\"? 
Thirdly, not\nsure if client_hostname is really needed, isn't the address:port pair\nenough to identify the client? Lastly, introducing a new GUC for\ntruncating values in a single view, which can only be set at server\nstart, doesn't strike me as a great idea. What is the worst-case\nscenario if instead we will always allocate\n`strlen(MyProcPort->authn_id)` and the user will truncate the result\nmanually if needed?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 19 Jul 2021 16:56:24 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Addition of authenticated ID to pg_stat_activity" }, { "msg_contents": "On Mon, Jul 19, 2021 at 04:56:24PM +0300, Aleksander Alekseev wrote:\n> Any reason why we shouldn't simply exclude internal processes from the\n> view? Do they have a connection that the VIEW could show?\n\nYeah, they don't really have any useful information. Still, I kept\nthat mainly as a matter of consistency with pg_stat_activity, and this\ncan be useful to find out about internal processes without relying on\nan extra check based on pg_stat_activity.backend_type. Perhaps we\ncould just add a check in system_views.sql based on the NULL-ness of\nclient_port.\n\n> Secondly, maybe instead of magic constants like -1, we could use an\n> additional text column, e.g.
connection_type: \"unix\"?\n\nI am not really inclined to break that more, as told by\npg_stat_get_activity() there are cases where this looks useful:\n/*\n * Unix sockets always reports NULL for host and -1 for\n * port, so it's possible to tell the difference to\n * connections we have no permissions to view, or with\n * errors.\n */\n\n> Thirdly, not\n> sure if client_hostname is really needed, isn't the address:port pair\n> enough to identify the client?\n\nIt seems to me that this is still useful to know about\nPort->remote_hostname.\n\n> Lastly, introducing a new GUC for\n> truncating values in a single view, which can only be set at server\n> start, doesn't strike me as a great idea. What is the worst-case\n> scenario if instead we will always allocate\n> `strlen(MyProcPort->authn_id)` and the user will truncate the result\n> manually if needed?\n\nThe authenticated ID could be an SSL DN longer than the default of\n128kB that this patch is proposing. I think that it is a good idea to\nprovide some way to the user to be able to control that without a\nrecompilation.\n--\nMichael", "msg_date": "Wed, 21 Jul 2021 13:21:17 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Addition of authenticated ID to pg_stat_activity" }, { "msg_contents": "On Wed, Jul 21, 2021 at 01:21:17PM +0900, Michael Paquier wrote:\n> The authenticated ID could be an SSL DN longer than the default of\n> 128kB that this patch is proposing. I think that it is a good idea to\n> provide some way to the user to be able to control that without a\n> recompilation.\n\nI got to think about this patch more for the last couple of days, and\nI'd still think that having a GUC to control how much shared memory we\nneed for the authenticated ID in each BackendStatusArray makes sense. Now, the\nthread has been idle for two months, and it does not seem to\nattract much attention.
This also includes a split of\npg_stat_activity for client_addr, client_hostname and client_port into\na new catalog, which may be hard to justify for this feature. So I am\ndropping the patch.\n--\nMichael", "msg_date": "Fri, 1 Oct 2021 10:08:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Addition of authenticated ID to pg_stat_activity" } ]
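For illustration only: the field split that the thread above converged on (a separate pg_stat_connection carrying the PID, the three client_* columns, and the authenticated ID, with everything else staying in pg_stat_activity) can be sketched outside the server. This is an editorial Python sketch, not PostgreSQL code: the field groupings are taken from the messages above, the sample row values are invented, and `rejoin` models Magnus's point that any query needing both halves has to join them back on pid.

```python
# Hypothetical model of the proposed view split; not PostgreSQL source.
# pg_stat_connection would keep the PID, the client_* columns and the
# authn ID; the rest stays in pg_stat_activity. The row below is made up.

ACTIVITY_FIELDS = {"pid", "backend_type", "backend_start", "state",
                   "query", "wait_event"}
CONNECTION_FIELDS = {"pid", "client_addr", "client_hostname",
                     "client_port", "authn_id"}

def split_row(row):
    """Project one backend-status row into its activity/connection halves."""
    return ({k: v for k, v in row.items() if k in ACTIVITY_FIELDS},
            {k: v for k, v in row.items() if k in CONNECTION_FIELDS})

def rejoin(activity, connection):
    """A query needing both halves must join them back on pid."""
    assert activity["pid"] == connection["pid"]
    return {**activity, **connection}

row = {
    "pid": 43204, "backend_type": "client backend",
    "backend_start": "2021-05-17 10:28:49", "state": "active",
    "query": "SELECT 1", "wait_event": None,
    # NULL addr/hostname plus port -1 stand for a Unix-socket connection,
    # the case Aleksander's sample output shows above.
    "client_addr": None, "client_hostname": None, "client_port": -1,
    "authn_id": None,
}
activity, connection = split_row(row)
assert set(connection) == CONNECTION_FIELDS
assert rejoin(activity, connection) == row
```

The round trip through `rejoin` is the cost the thread weighs: splitting the catalog is only a win if most monitoring queries never need the connection half.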
[ { "msg_contents": "Hi,\n\nIn gistinitpage, the pageSize variable looks redundant; instead we could\njust pass BLCKSZ. This will be consistent with its peers\nBloomInitPage, brin_page_init and SpGistInitPage. Attaching a small\npatch. Thoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 26 Apr 2021 08:42:30 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Remove redundant variable pageSize in gistinitpage" }, { "msg_contents": "On 26.04.21 05:12, Bharath Rupireddy wrote:\n> In gistinitpage, the pageSize variable looks redundant; instead we could\n> just pass BLCKSZ. This will be consistent with its peers\n> BloomInitPage, brin_page_init and SpGistInitPage. Attaching a small\n> patch. Thoughts?\n\nCommitted.\n\nThis code was new in this form in PG14 \n(16fa9b2b30a357b4aea982bd878ec2e5e002dbcc), so it made sense to clean it \nup now.\n\n\n\n", "msg_date": "Fri, 25 Jun 2021 08:03:13 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Remove redundant variable pageSize in gistinitpage" } ]
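The cleanup committed in the thread above, dropping a local variable that only ever holds BLCKSZ and passing the constant directly the way the peer page initializers do, can be illustrated with a small sketch. This is Python for brevity (the real change is PostgreSQL C); `page_init`, the buffer handling, and the 8192-byte value of BLCKSZ here are stand-ins, not the server's actual implementation.

```python
# Editorial sketch of the refactor pattern only; not PostgreSQL source.

BLCKSZ = 8192  # stand-in for the PostgreSQL block-size constant

def page_init(buf, page_size, special_size):
    """Toy PageInit: zero-fill the buffer and record the sizes used."""
    buf.clear()
    buf.extend(b"\x00" * page_size)
    return {"page_size": page_size, "special_size": special_size}

def gist_init_page_before(buf, special_size):
    page_size = BLCKSZ  # redundant local: it can only ever be BLCKSZ
    return page_init(buf, page_size, special_size)

def gist_init_page_after(buf, special_size):
    # constant passed directly, matching the peer initializers' style
    return page_init(buf, BLCKSZ, special_size)

buf_a, buf_b = bytearray(), bytearray()
assert gist_init_page_before(buf_a, 16) == gist_init_page_after(buf_b, 16)
assert len(buf_a) == len(buf_b) == BLCKSZ
```

Behavior is identical before and after; the only change is removing a name that added nothing.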
[ { "msg_contents": "Hi \n\nI think I may have found a bug when using streaming in logical replication. Could anyone please take a look at this?\n\nHere's what I did to produce the problem.\nI set logical_decoding_work_mem and created multiple publications at publisher, created multiple subscriptions with \"streaming = on\" at subscriber.\nHowever, an assertion failed at publisher when I COMMIT and ROLLBACK multiple transactions at the same time.\n\nThe log reported a FailedAssertion:\nTRAP: FailedAssertion(\"txn->size == 0\", File: \"reorderbuffer.c\", Line: 3465, PID: 911730)\n\nThe problem happens both in synchronous mode and asynchronous mode. When there are only one or two publications, it doesn't seem to happen. (In my case, there are 8 publications and the failure always happened). \n\nThe scripts and the log are attached. It took me about 4 minutes to run the script on my machine.\nPlease contact me if you need more specific info for the problem.\n\nRegards\nTang", "msg_date": "Mon, 26 Apr 2021 07:15:34 +0000", "msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "[BUG] \"FailedAssertion\" reported when streaming in logical\n replication" }, { "msg_contents": "On Mon, 26 Apr 2021 at 12:45 PM, tanghy.fnst@fujitsu.com <\ntanghy.fnst@fujitsu.com> wrote:\n\n> Hi\n>\n> I think I may have found a bug when using streaming in logical replication.\n> Could anyone please take a look at this?\n>\n> Here's what I did to produce the problem.\n> I set logical_decoding_work_mem and created multiple publications at\n> publisher, created multiple subscriptions with \"streaming = on\" at\n> subscriber.\n> However, an assertion failed at publisher when I COMMIT and ROLLBACK\n> multiple transactions at the same time.\n>\n> The log reported a FailedAssertion:\n> TRAP: FailedAssertion(\"txn->size == 0\", File: \"reorderbuffer.c\", Line:\n> 3465, PID: 911730)\n>\n> The problem happens both in synchronous mode and asynchronous mode.
When\n> there are only one or two publications, It doesn't seem to happen. (In my\n> case, there are 8 publications and the failure always happened).\n>\n> The scripts and the log are attached. It took me about 4 minutes to run\n> the script on my machine.\n> Please contact me if you need more specific info for the problem.\n\n\n\nThanks for reporting. I will look into it.\n\n> --\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 26 Apr 2021 13:26:47 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] \"FailedAssertion\" reported when streaming in logical\n replication" }, { "msg_contents": "On Mon, Apr 26, 2021 at 1:26 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, 26 Apr 2021 at 12:45 PM, tanghy.fnst@fujitsu.com <tanghy.fnst@fujitsu.com> wrote:\n>>\n>> Hi\n>>\n>> I think I may found a bug when using streaming in logical replication. Could anyone please take a look at this?\n>>\n>> Here's what I did to produce the problem.\n>> I set logical_decoding_work_mem and created multiple publications at publisher, created multiple subscriptions with \"streaming = on\" at subscriber.\n>> However, an assertion failed at publisher when I COMMIT and ROLLBACK multiple transactions at the same time.\n>>\n>> The log reported a FailedAssertion:\n>> TRAP: FailedAssertion(\"txn->size == 0\", File: \"reorderbuffer.c\", Line: 3465, PID: 911730)\n>>\n>> The problem happens both in synchronous mode and asynchronous mode. When there are only one or two publications, It doesn't seem to happen. (In my case, there are 8 publications and the failure always happened).\n>>\n>> The scripts and the log are attached. It took me about 4 minutes to run the script on my machine.\n>> Please contact me if you need more specific info for the problem.\n>\n>\n>\n> Thanks for reporting. I will look into it.\n\nI am able to reproduce this and I think I have done the initial investigation.\n\nThe cause of the issue is that, this transaction has only one change\nand that change is XLOG_HEAP2_NEW_CID, which is added through\nSnapBuildProcessNewCid. Basically, when we add any changes through\nSnapBuildProcessChange we set the base snapshot but when we add\nSnapBuildProcessNewCid this we don't set the base snapshot, because\nthere is nothing to be done for this change.
Basically, when we add any changes through\nSnapBuildProcessChange we set the base snapshot but when we add\nSnapBuildProcessNewCid this we don't set the base snapshot, because\nthere is nothing to be done for this change. Now, this transaction is\nidentified as the biggest transaction with non -partial changes, and\nnow in ReorderBufferStreamTXN, it will return immediately because the\nbase_snapshot is NULL. I think the fix should be while selecting the\nlargest transaction in ReorderBufferLargestTopTXN, we should check the\nbase_snapshot should not be NULL.\n\nI will think more about this and post the patch.\n\n From the core dump, we can see that base_snapshot is 0x0 and\nntuplecids = 1, and txn_flags = 1 also proves that it has a new\ncommand id change. And the size of the txn also shows that it has\nonly one change and that is REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID\nbecause in that case, the change size will be just the\nsizeof(ReorderBufferChange) which is 80.\n\n(gdb) p *txn\n$4 = {txn_flags = 1, xid = 1115, toplevel_xid = 0, gid = 0x0,\nfirst_lsn = 1061159120, final_lsn = 0, end_lsn = 0, toptxn = 0x0,\nrestart_decoding_lsn = 958642624,\n origin_id = 0, origin_lsn = 0, commit_time = 0, base_snapshot = 0x0,\nbase_snapshot_lsn = 0, base_snapshot_node = {prev = 0x0, next = 0x0},\nsnapshot_now = 0x0,\n command_id = 4294967295, nentries = 1, nentries_mem = 1, changes =\n{head = {prev = 0x3907c18, next = 0x3907c18}}, tuplecids = {head =\n{prev = 0x39073d8,\n next = 0x39073d8}}, ntuplecids = 1, tuplecid_hash = 0x0,\ntoast_hash = 0x0, subtxns = {head = {prev = 0x30f1cd8, next =\n0x30f1cd8}}, nsubtxns = 0,\n ninvalidations = 0, invalidations = 0x0, node = {prev = 0x30f1a98,\nnext = 0x30c64f8}, size = 80, total_size = 80, concurrent_abort =\nfalse,\n output_plugin_private = 0x0}\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 26 Apr 2021 17:55:34 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", 
"msg_from_op": false, "msg_subject": "Re: [BUG] \"FailedAssertion\" reported when streaming in logical\n replication" }, { "msg_contents": "On Mon, Apr 26, 2021 at 5:55 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Apr 26, 2021 at 1:26 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Mon, 26 Apr 2021 at 12:45 PM, tanghy.fnst@fujitsu.com <tanghy.fnst@fujitsu.com> wrote:\n> >>\n> >> Hi\n> >>\n> >> I think I may found a bug when using streaming in logical replication. Could anyone please take a look at this?\n> >>\n> >> Here's what I did to produce the problem.\n> >> I set logical_decoding_work_mem and created multiple publications at publisher, created multiple subscriptions with \"streaming = on\" at subscriber.\n> >> However, an assertion failed at publisher when I COMMIT and ROLLBACK multiple transactions at the same time.\n> >>\n> >> The log reported a FailedAssertion:\n> >> TRAP: FailedAssertion(\"txn->size == 0\", File: \"reorderbuffer.c\", Line: 3465, PID: 911730)\n> >>\n> >> The problem happens both in synchronous mode and asynchronous mode. When there are only one or two publications, It doesn't seem to happen. (In my case, there are 8 publications and the failure always happened).\n> >>\n> >> The scripts and the log are attached. It took me about 4 minutes to run the script on my machine.\n> >> Please contact me if you need more specific info for the problem.\n> >\n> >\n> >\n> > Thanks for reporting. I will look into it.\n>\n> I am able to reproduce this and I think I have done the initial investigation.\n>\n> The cause of the issue is that, this transaction has only one change\n> and that change is XLOG_HEAP2_NEW_CID, which is added through\n> SnapBuildProcessNewCid. Basically, when we add any changes through\n> SnapBuildProcessChange we set the base snapshot but when we add\n> SnapBuildProcessNewCid this we don't set the base snapshot, because\n> there is nothing to be done for this change. 
Now, this transaction is\n> identified as the biggest transaction with non -partial changes, and\n> now in ReorderBufferStreamTXN, it will return immediately because the\n> base_snapshot is NULL.\n>\n\nYour analysis sounds correct to me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 26 Apr 2021 18:59:16 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] \"FailedAssertion\" reported when streaming in logical\n replication" }, { "msg_contents": "On Mon, Apr 26, 2021 at 6:59 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Apr 26, 2021 at 5:55 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > I am able to reproduce this and I think I have done the initial investigation.\n> >\n> > The cause of the issue is that, this transaction has only one change\n> > and that change is XLOG_HEAP2_NEW_CID, which is added through\n> > SnapBuildProcessNewCid. Basically, when we add any changes through\n> > SnapBuildProcessChange we set the base snapshot but when we add\n> > SnapBuildProcessNewCid this we don't set the base snapshot, because\n> > there is nothing to be done for this change. 
Now, this transaction is\n> > identified as the biggest transaction with non -partial changes, and\n> > now in ReorderBufferStreamTXN, it will return immediately because the\n> > base_snapshot is NULL.\n> >\n>\n> Your analysis sounds correct to me.\n>\n\nThanks, I have attached a patch to fix this.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 26 Apr 2021 19:52:04 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] \"FailedAssertion\" reported when streaming in logical\n replication" }, { "msg_contents": "On Mon, Apr 26, 2021 at 7:52 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Apr 26, 2021 at 6:59 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Apr 26, 2021 at 5:55 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > I am able to reproduce this and I think I have done the initial investigation.\n> > >\n> > > The cause of the issue is that, this transaction has only one change\n> > > and that change is XLOG_HEAP2_NEW_CID, which is added through\n> > > SnapBuildProcessNewCid. Basically, when we add any changes through\n> > > SnapBuildProcessChange we set the base snapshot but when we add\n> > > SnapBuildProcessNewCid this we don't set the base snapshot, because\n> > > there is nothing to be done for this change. 
Now, this transaction is\n> > > identified as the biggest transaction with non -partial changes, and\n> > > now in ReorderBufferStreamTXN, it will return immediately because the\n> > > base_snapshot is NULL.\n> > >\n> >\n> > Your analysis sounds correct to me.\n> >\n>\n> Thanks, I have attached a patch to fix this.\n\nThere is also one very silly mistake in below condition, basically,\nonce we got any transaction for next transaction it is unconditionally\nselecting without comparing the size because largest != NULL is wrong,\nideally this should be largest == NULL, basically, if we haven't\nselect any transaction then only we can approve next transaction\nwithout comparing the size.\n\nif ((largest != NULL || txn->total_size > largest_size) &&\n(txn->base_snapshot != NULL) && (txn->total_size > 0) &&\n!(rbtxn_has_incomplete_tuple(txn)))\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 27 Apr 2021 11:03:02 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] \"FailedAssertion\" reported when streaming in logical\n replication" }, { "msg_contents": "On Mon, Apr 26, 2021 at 7:52 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Apr 26, 2021 at 6:59 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Apr 26, 2021 at 5:55 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > I am able to reproduce this and I think I have done the initial investigation.\n> > >\n> > > The cause of the issue is that, this transaction has only one change\n> > > and that change is XLOG_HEAP2_NEW_CID, which is added through\n> > > SnapBuildProcessNewCid. Basically, when we add any changes through\n> > > SnapBuildProcessChange we set the base snapshot but when we add\n> > > SnapBuildProcessNewCid this we don't set the base snapshot, because\n> > > there is nothing to be done for this change. 
Now, this transaction is\n> > > identified as the biggest transaction with non -partial changes, and\n> > > now in ReorderBufferStreamTXN, it will return immediately because the\n> > > base_snapshot is NULL.\n> > >\n> >\n> > Your analysis sounds correct to me.\n> >\n>\n> Thanks, I have attached a patch to fix this.\n>\n\nCan't we use 'txns_by_base_snapshot_lsn' list for this purpose? It is\nensured in ReorderBufferSetBaseSnapshot that we always assign\nbase_snapshot to a top-level transaction if the current is a known\nsubxact. I think that will be true because we always form xid-subxid\nrelation before processing each record in\nLogicalDecodingProcessRecord.\n\nFew other minor comments:\n1. I think we can update the comments atop function ReorderBufferLargestTopTXN.\n2. minor typo in comments atop ReorderBufferLargestTopTXN \"...There is\na scope of optimization here such that we can select the largest\ntransaction which has complete changes...\". In this 'complete' should\nbe incomplete. 
This is not related to this patch but I think we can\nfix it along with this because anyway we are going to change\nsurrounding comments.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 27 Apr 2021 11:43:47 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] \"FailedAssertion\" reported when streaming in logical\n replication" }, { "msg_contents": "On Tue, Apr 27, 2021 at 11:43 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Apr 26, 2021 at 7:52 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Mon, Apr 26, 2021 at 6:59 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, Apr 26, 2021 at 5:55 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > I am able to reproduce this and I think I have done the initial investigation.\n> > > >\n> > > > The cause of the issue is that, this transaction has only one change\n> > > > and that change is XLOG_HEAP2_NEW_CID, which is added through\n> > > > SnapBuildProcessNewCid. Basically, when we add any changes through\n> > > > SnapBuildProcessChange we set the base snapshot but when we add\n> > > > SnapBuildProcessNewCid this we don't set the base snapshot, because\n> > > > there is nothing to be done for this change. Now, this transaction is\n> > > > identified as the biggest transaction with non -partial changes, and\n> > > > now in ReorderBufferStreamTXN, it will return immediately because the\n> > > > base_snapshot is NULL.\n> > > >\n> > >\n> > > Your analysis sounds correct to me.\n> > >\n> >\n> > Thanks, I have attached a patch to fix this.\n> >\n>\n> Can't we use 'txns_by_base_snapshot_lsn' list for this purpose? It is\n> ensured in ReorderBufferSetBaseSnapshot that we always assign\n> base_snapshot to a top-level transaction if the current is a known\n> subxact. 
I think that will be true because we always form xid-subxid\n> relation before processing each record in\n> LogicalDecodingProcessRecord.\n\nYeah, we can do that, but here we are only interested in top\ntransactions and this list will give us sub-transaction as well so we\nwill have to skip it in the below if condition. So I think using\ntoplevel_by_lsn and skipping the txn without base_snapshot in below if\ncondition will be cheaper compared to process all the transactions\nwith base snapshot i.e. txns_by_base_snapshot_lsn and skipping the\nsub-transactions in the below if conditions. Whats your thoughts on\nthis?\n\n\n> Few other minor comments:\n> 1. I think we can update the comments atop function ReorderBufferLargestTopTXN.\n> 2. minor typo in comments atop ReorderBufferLargestTopTXN \"...There is\n> a scope of optimization here such that we can select the largest\n> transaction which has complete changes...\". In this 'complete' should\n> be incomplete. This is not related to this patch but I think we can\n> fix it along with this because anyway we are going to change\n> surrounding comments.\n\nI will work on these in the next version.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 27 Apr 2021 11:50:01 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] \"FailedAssertion\" reported when streaming in logical\n replication" }, { "msg_contents": "On Tue, Apr 27, 2021 at 11:50 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Apr 27, 2021 at 11:43 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Apr 26, 2021 at 7:52 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Mon, Apr 26, 2021 at 6:59 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Mon, Apr 26, 2021 at 5:55 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > >\n> > > > > I am able to reproduce this and I think I have done the initial 
investigation.\n> > > > >\n> > > > > The cause of the issue is that, this transaction has only one change\n> > > > > and that change is XLOG_HEAP2_NEW_CID, which is added through\n> > > > > SnapBuildProcessNewCid. Basically, when we add any changes through\n> > > > > SnapBuildProcessChange we set the base snapshot but when we add\n> > > > > SnapBuildProcessNewCid this we don't set the base snapshot, because\n> > > > > there is nothing to be done for this change. Now, this transaction is\n> > > > > identified as the biggest transaction with non -partial changes, and\n> > > > > now in ReorderBufferStreamTXN, it will return immediately because the\n> > > > > base_snapshot is NULL.\n> > > > >\n> > > >\n> > > > Your analysis sounds correct to me.\n> > > >\n> > >\n> > > Thanks, I have attached a patch to fix this.\n> > >\n> >\n> > Can't we use 'txns_by_base_snapshot_lsn' list for this purpose? It is\n> > ensured in ReorderBufferSetBaseSnapshot that we always assign\n> > base_snapshot to a top-level transaction if the current is a known\n> > subxact. I think that will be true because we always form xid-subxid\n> > relation before processing each record in\n> > LogicalDecodingProcessRecord.\n>\n> Yeah, we can do that, but here we are only interested in top\n> transactions and this list will give us sub-transaction as well so we\n> will have to skip it in the below if condition.\n>\n\nI am not so sure about this point. I have explained above why I think\nthere won't be any subtransactions in this. 
Can you please let me know\nwhat am I missing if anything?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 27 Apr 2021 12:05:32 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] \"FailedAssertion\" reported when streaming in logical\n replication" }, { "msg_contents": "On Tue, Apr 27, 2021 at 12:05 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > Can't we use 'txns_by_base_snapshot_lsn' list for this purpose? It is\n> > > ensured in ReorderBufferSetBaseSnapshot that we always assign\n> > > base_snapshot to a top-level transaction if the current is a known\n> > > subxact. I think that will be true because we always form xid-subxid\n> > > relation before processing each record in\n> > > LogicalDecodingProcessRecord.\n> >\n> > Yeah, we can do that, but here we are only interested in top\n> > transactions and this list will give us sub-transaction as well so we\n> > will have to skip it in the below if condition.\n> >\n>\n> I am not so sure about this point. I have explained above why I think\n> there won't be any subtransactions in this. Can you please let me know\n> what am I missing if anything?\n\nGot your point, yeah this will only have top transactions so we can\nuse this. I will change this in the next patch. In fact we can put\nan assert that it should not be an sub transaction?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 27 Apr 2021 12:21:53 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] \"FailedAssertion\" reported when streaming in logical\n replication" }, { "msg_contents": "On Tue, Apr 27, 2021 at 12:22 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Apr 27, 2021 at 12:05 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > Can't we use 'txns_by_base_snapshot_lsn' list for this purpose? 
It is\n> > > > ensured in ReorderBufferSetBaseSnapshot that we always assign\n> > > > base_snapshot to a top-level transaction if the current is a known\n> > > > subxact. I think that will be true because we always form xid-subxid\n> > > > relation before processing each record in\n> > > > LogicalDecodingProcessRecord.\n> > >\n> > > Yeah, we can do that, but here we are only interested in top\n> > > transactions and this list will give us sub-transaction as well so we\n> > > will have to skip it in the below if condition.\n> > >\n> >\n> > I am not so sure about this point. I have explained above why I think\n> > there won't be any subtransactions in this. Can you please let me know\n> > what am I missing if anything?\n>\n> Got your point, yeah this will only have top transactions so we can\n> use this. I will change this in the next patch. In fact we can put\n> an assert that it should not be an sub transaction?\n>\n\nRight. It is good to have an assert.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 27 Apr 2021 12:55:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] \"FailedAssertion\" reported when streaming in logical\n replication" }, { "msg_contents": "On Tue, Apr 27, 2021 at 12:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Apr 27, 2021 at 12:22 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Tue, Apr 27, 2021 at 12:05 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > Can't we use 'txns_by_base_snapshot_lsn' list for this purpose? It is\n> > > > > ensured in ReorderBufferSetBaseSnapshot that we always assign\n> > > > > base_snapshot to a top-level transaction if the current is a known\n> > > > > subxact. 
I think that will be true because we always form xid-subxid\n> > > > > relation before processing each record in\n> > > > > LogicalDecodingProcessRecord.\n> > > >\n> > > > Yeah, we can do that, but here we are only interested in top\n> > > > transactions and this list will give us sub-transaction as well so we\n> > > > will have to skip it in the below if condition.\n> > > >\n> > >\n> > > I am not so sure about this point. I have explained above why I think\n> > > there won't be any subtransactions in this. Can you please let me know\n> > > what am I missing if anything?\n> >\n> > Got your point, yeah this will only have top transactions so we can\n> > use this. I will change this in the next patch. In fact we can put\n> > an assert that it should not be an sub transaction?\n> >\n>\n> Right. It is good to have an assert.\n\nI have modified the patch based on the above comments.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 27 Apr 2021 17:18:07 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] \"FailedAssertion\" reported when streaming in logical\n replication" }, { "msg_contents": "> I have modified the patch based on the above comments.\r\n\r\nThanks for your patch.\r\nI tested again after applying your patch and the problem is fixed.\r\n\r\nRegards\r\nTang\r\n", "msg_date": "Wed, 28 Apr 2021 06:55:11 +0000", "msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [BUG] \"FailedAssertion\" reported when streaming in logical\n replication" }, { "msg_contents": "On Wed, Apr 28, 2021 at 12:25 PM tanghy.fnst@fujitsu.com\n<tanghy.fnst@fujitsu.com> wrote:\n>\n> > I have modified the patch based on the above comments.\n>\n> Thanks for your patch.\n> I tested again after applying your patch and the problem is fixed.\n\nThanks for confirming.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: 
http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 28 Apr 2021 13:02:33 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] \"FailedAssertion\" reported when streaming in logical\n replication" }, { "msg_contents": "On Wed, Apr 28, 2021 at 1:02 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Apr 28, 2021 at 12:25 PM tanghy.fnst@fujitsu.com\n> <tanghy.fnst@fujitsu.com> wrote:\n> >\n> > > I have modified the patch based on the above comments.\n> >\n> > Thanks for your patch.\n> > I tested again after applying your patch and the problem is fixed.\n>\n> Thanks for confirming.\n\nI tried to think about how to write a test case for this scenario, but\nI think it will not be possible to generate an automated test case for\nthis. Basically, we need 2 concurrent transactions and out of that,\nwe need one transaction which just has processed only one change i.e\nXLOG_HEAP2_NEW_CID and another transaction should overflow the logical\ndecoding work mem, so that we select the wrong transaction which\ndoesn't have the base snapshot. But how to control that the\ntransaction which is performing the DDL just write the\nXLOG_HEAP2_NEW_CID wal and before it writes any other WAL we should\nget the WAl from other transaction which overflows the buffer.\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 29 Apr 2021 11:48:27 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] \"FailedAssertion\" reported when streaming in logical\n replication" }, { "msg_contents": "\r\nOn Thursday, April 29, 2021 3:18 PM, Dilip Kumar <dilipbalaut@gmail.com> wrote\r\n\r\n>I tried to think about how to write a test case for this scenario, but\r\n>I think it will not be possible to generate an automated test case for this. 
\r\n>Basically, we need 2 concurrent transactions and out of that,\r\n>we need one transaction which just has processed only one change i.e\r\n>XLOG_HEAP2_NEW_CID and another transaction should overflow the logical\r\n>decoding work mem, so that we select the wrong transaction which\r\n>doesn't have the base snapshot. But how to control that the\r\n>transaction which is performing the DDL just write the\r\n>XLOG_HEAP2_NEW_CID wal and before it writes any other WAL we should\r\n>get the WAl from other transaction which overflows the buffer.\r\n\r\nThanks for your updating.\r\nActually, I tried to make the automated test for the problem, too. But made no process on this.\r\nAgreed on your opinion \" not be possible to generate an automated test case for this \".\r\n\r\nIf anyone figure out a good solution for the test automation of this case. \r\nPlease be kind to share that with us. Thanks.\r\n\r\nRegards,\r\nTang\r\n", "msg_date": "Thu, 29 Apr 2021 06:39:41 +0000", "msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [BUG] \"FailedAssertion\" reported when streaming in logical\n replication" }, { "msg_contents": "On Thu, Apr 29, 2021 at 12:09 PM tanghy.fnst@fujitsu.com\n<tanghy.fnst@fujitsu.com> wrote:\n>\n>\n> On Thursday, April 29, 2021 3:18 PM, Dilip Kumar <dilipbalaut@gmail.com> wrote\n>\n> >I tried to think about how to write a test case for this scenario, but\n> >I think it will not be possible to generate an automated test case for this.\n> >Basically, we need 2 concurrent transactions and out of that,\n> >we need one transaction which just has processed only one change i.e\n> >XLOG_HEAP2_NEW_CID and another transaction should overflow the logical\n> >decoding work mem, so that we select the wrong transaction which\n> >doesn't have the base snapshot. 
But how to control that the\n> >transaction which is performing the DDL just write the\n> >XLOG_HEAP2_NEW_CID wal and before it writes any other WAL we should\n> >get the WAl from other transaction which overflows the buffer.\n>\n> Thanks for your updating.\n> Actually, I tried to make the automated test for the problem, too. But made no process on this.\n> Agreed on your opinion \" not be possible to generate an automated test case for this \".\n\nThanks for trying this out.\n\n> If anyone figure out a good solution for the test automation of this case.\n> Please be kind to share that with us. Thanks.\n\n+1\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 29 Apr 2021 16:23:41 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] \"FailedAssertion\" reported when streaming in logical\n replication" }, { "msg_contents": "On Tue, Apr 27, 2021 at 5:18 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> I have modified the patch based on the above comments.\n>\n\nThe patch looks good to me. I have slightly modified the comments and\ncommit message. See, what you think of the attached? I think we can\nleave the test for this as there doesn't seem to be an easy way to\nautomate it.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Thu, 29 Apr 2021 17:23:57 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] \"FailedAssertion\" reported when streaming in logical\n replication" }, { "msg_contents": "On Thu, Apr 29, 2021 at 5:24 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Apr 27, 2021 at 5:18 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > I have modified the patch based on the above comments.\n> >\n>\n> The patch looks good to me. I have slightly modified the comments and\n> commit message. See, what you think of the attached? 
I think we can\n> leave the test for this as there doesn't seem to be an easy way to\n> automate it.\n\nYour changes look good to me.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 30 Apr 2021 10:08:04 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] \"FailedAssertion\" reported when streaming in logical\n replication" } ]
[ { "msg_contents": "Hi,\n\nWhile reviewing one of the logical replication patches, I found that\nwe do not include hint messages to display the actual option which has\nbeen specified more than once in case of redundant option error. I\nfelt including this will help in easily identifying the error, users\nwill not have to search through the statement to identify where the\nactual error is present. Attached a patch for the same.\nThoughts?\n\nRegards,\nVignesh", "msg_date": "Mon, 26 Apr 2021 17:28:55 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Enhanced error message to include hint messages for redundant options\n error" }, { "msg_contents": "On Mon, Apr 26, 2021 at 5:29 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Hi,\n>\n> While reviewing one of the logical replication patches, I found that\n> we do not include hint messages to display the actual option which has\n> been specified more than once in case of redundant option error. I\n> felt including this will help in easily identifying the error, users\n> will not have to search through the statement to identify where the\n> actual error is present. Attached a patch for the same.\n> Thoughts?\n\n+1. 
A more detailed error will be useful in a rare scenario like users\nhave specified duplicate options along with a lot of other options,\nthey will know for which option the error is thrown by the server.\n\nInstead of adding errhint or errdetail, how about just changing the\nerror message itself to something like \"option \\\"%s\\\" specified more\nthan once\" or \"parameter \\\"%s\\\" specified more than once\" like we have\nin other places in the code?\n\nComments on the patch:\n\n1) generally errhint and errdetail messages should end with a period\n\".\", please see their usage in other places.\n+ errhint(\"Option \\\"streaming\\\" specified more\nthan once\")));\n\n2) I think it should be errdetail instead of errhint, because you are\ngiving more information about the error, but not hinting user how to\novercome it. If you had to say something like \"Remove duplicate\n\\\"password\\\" option.\" or \"The \\\"password\\\" option is specified more\nthan once. Remove all the duplicates.\", then it makes sense to use\nerrhint.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 26 Apr 2021 17:49:21 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Mon, Apr 26, 2021 at 5:49 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Apr 26, 2021 at 5:29 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > While reviewing one of the logical replication patches, I found that\n> > we do not include hint messages to display the actual option which has\n> > been specified more than once in case of redundant option error. I\n> > felt including this will help in easily identifying the error, users\n> > will not have to search through the statement to identify where the\n> > actual error is present. 
Attached a patch for the same.\n> > Thoughts?\n>\n\n+1 for improving the error\n\n> Comments on the patch:\n>\n> 1) generally errhint and errdetail messages should end with a period\n> \".\", please see their usage in other places.\n> + errhint(\"Option \\\"streaming\\\" specified more\n> than once\")));\n>\n> 2) I think it should be errdetail instead of errhint, because you are\n> giving more information about the error, but not hinting user how to\n> overcome it. If you had to say something like \"Remove duplicate\n> \\\"password\\\" option.\" or \"The \\\"password\\\" option is specified more\n> than once. 
A more detailed error will be useful in a rare scenario like users\n> have specified duplicate options along with a lot of other options,\n> they will know for which option the error is thrown by the server.\n>\n> Instead of adding errhint or errdetail, how about just changing the\n> error message itself to something like \"option \\\"%s\\\" specified more\n> than once\" or \"parameter \\\"%s\\\" specified more than once\" like we have\n> in other places in the code?\n>\n\nBoth seemed fine but I preferred using errdetail as I felt it is\nslightly better for the details to appear in a new line.\n\n> Comments on the patch:\n>\n> 1) generally errhint and errdetail messages should end with a period\n> \".\", please see their usage in other places.\n> + errhint(\"Option \\\"streaming\\\" specified more\n> than once\")));\n>\n\nModified it.\n\n> 2) I think it should be errdetail instead of errhint, because you are\n> giving more information about the error, but not hinting user how to\n> overcome it. If you had to say something like \"Remove duplicate\n> \\\"password\\\" option.\" or \"The \\\"password\\\" option is specified more\n> than once. 
Remove all the duplicates.\", then it makes sense to use\n> errhint.\n\nModified it.\n\nAttached patch for the same.\n\nRegards,\nVignesh", "msg_date": "Mon, 26 Apr 2021 19:01:55 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Mon, Apr 26, 2021 at 6:18 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Apr 26, 2021 at 5:49 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Mon, Apr 26, 2021 at 5:29 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > Hi,\n> > >\n> > > While reviewing one of the logical replication patches, I found that\n> > > we do not include hint messages to display the actual option which has\n> > > been specified more than once in case of redundant option error. I\n> > > felt including this will help in easily identifying the error, users\n> > > will not have to search through the statement to identify where the\n> > > actual error is present. Attached a patch for the same.\n> > > Thoughts?\n> >\n>\n> +1 for improving the error\n>\n> > Comments on the patch:\n> >\n> > 1) generally errhint and errdetail messages should end with a period\n> > \".\", please see their usage in other places.\n> > + errhint(\"Option \\\"streaming\\\" specified more\n> > than once\")));\n> >\n> > 2) I think it should be errdetail instead of errhint, because you are\n> > giving more information about the error, but not hinting user how to\n> > overcome it. If you had to say something like \"Remove duplicate\n> > \\\"password\\\" option.\" or \"The \\\"password\\\" option is specified more\n> > than once. Remove all the duplicates.\", then it makes sense to use\n> > errhint.\n>\n> I agree this should be errdetail.\n\nThanks for the comment, I have modified and shared the v2 patch\nattached in the previous mail.\n\nRegards,\nVignesh", "msg_date": "Mon, 26 Apr 2021 19:03:06 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Mon, Apr 26, 2021 at 7:02 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Mon, Apr 26, 2021 at 5:49 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Mon, Apr 26, 2021 at 5:29 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > Hi,\n> > >\n> > > While reviewing one of the logical replication patches, I found that\n> > > we do not include hint messages to display the actual option which has\n> > > been specified more than once in case of redundant option error. I\n> > > felt including this will help in easily identifying the error, users\n> > > will not have to search through the statement to identify where the\n> > > actual error is present. Attached a patch for the same.\n> > > Thoughts?\n> >\n> > +1. A more detailed error will be useful in a rare scenario like users\n> > have specified duplicate options along with a lot of other options,\n> > they will know for which option the error is thrown by the server.\n> >\n> > Instead of adding errhint or errdetail, how about just changing the\n> > error message itself to something like \"option \\\"%s\\\" specified more\n> > than once\" or \"parameter \\\"%s\\\" specified more than once\" like we have\n> > in other places in the code?\n> >\n>\n> Both seemed fine but I preferred using errdetail as I felt it is\n> slightly better for the details to appear in a new line.\n\nThanks! 
Remove all the duplicates.\", then it makes sense to use\n> > errhint.\n>\n> I agree this should be errdetail.\n\nThanks for the comment, I have modified and shared the v2 patch\nattached in the previous mail.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 26 Apr 2021 19:03:06 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Mon, Apr 26, 2021 at 7:02 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Mon, Apr 26, 2021 at 5:49 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Mon, Apr 26, 2021 at 5:29 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > Hi,\n> > >\n> > > While reviewing one of the logical replication patches, I found that\n> > > we do not include hint messages to display the actual option which has\n> > > been specified more than once in case of redundant option error. I\n> > > felt including this will help in easily identifying the error, users\n> > > will not have to search through the statement to identify where the\n> > > actual error is present. Attached a patch for the same.\n> > > Thoughts?\n> >\n> > +1. A more detailed error will be useful in a rare scenario like users\n> > have specified duplicate options along with a lot of other options,\n> > they will know for which option the error is thrown by the server.\n> >\n> > Instead of adding errhint or errdetail, how about just changing the\n> > error message itself to something like \"option \\\"%s\\\" specified more\n> > than once\" or \"parameter \\\"%s\\\" specified more than once\" like we have\n> > in other places in the code?\n> >\n>\n> Both seemed fine but I preferred using errdetail as I felt it is\n> slightly better for the details to appear in a new line.\n\nThanks! 
IMO, it is better to change the error message to \"option\n\\\"%s\\\" specified more than once\" instead of adding an error detail.\nLet's hear other hackers' opinions.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 26 Apr 2021 19:15:10 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On 2021-Apr-26, Bharath Rupireddy wrote:\n\n> Thanks! IMO, it is better to change the error message to \"option\n> \\\"%s\\\" specified more than once\" instead of adding an error detail.\n> Let's hear other hackers' opinions.\n\nMany other places have the message \"conflicting or redundant options\",\nand then parser_errposition() shows the problem option. That seems\npretty unhelpful, so whenever the problem is the redundancy I would have\nthe message be explicit about that, and about which option is the\nproblem:\n errmsg(\"option \\\"%s\\\" specified more than once\", \"someopt\")\nDo note that wording it this way means only one translatable message,\nnot dozens.\n\nIn some cases it is possible that you'd end up with two messages, one\nfor \"redundant\" and one for the other ways for options to conflict with\nothers; for example collationcmds.c has one that's not as obvious, and\nforce_quote/force_quote_all in COPY have their own thing too.\n\nI think we should do parser_errposition() wherever possible, in\naddition to the wording change.\n\n-- \n�lvaro Herrera Valdivia, Chile\n<inflex> really, I see PHP as like a strange amalgamation of C, Perl, Shell\n<crab> inflex: you know that \"amalgam\" means \"mixture with mercury\",\n more or less, right?\n<crab> i.e., \"deadly poison\"\n\n\n", "msg_date": "Mon, 26 Apr 2021 10:36:29 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Enhanced error 
message to include hint messages for redundant\n options error" }, { "msg_contents": "On Mon, Apr 26, 2021 at 8:06 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Apr-26, Bharath Rupireddy wrote:\n>\n> > Thanks! IMO, it is better to change the error message to \"option\n> > \\\"%s\\\" specified more than once\" instead of adding an error detail.\n> > Let's hear other hackers' opinions.\n>\n> Many other places have the message \"conflicting or redundant options\",\n> and then parser_errposition() shows the problem option. That seems\n> pretty unhelpful, so whenever the problem is the redundancy I would have\n> the message be explicit about that, and about which option is the\n> problem:\n> errmsg(\"option \\\"%s\\\" specified more than once\", \"someopt\")\n> Do note that wording it this way means only one translatable message,\n> not dozens.\n\nI agree that we can just be clear about the problem. Looks like the\nmajority of the errors \"conflicting or redundant options\" are for\nredundant options. So, wherever \"conflicting or redundant options\"\nexists: 1) change the message to \"option \\\"%s\\\" specified more than\nonce\" and remove parser_errposition if it's there because the option\nname in the error message would give the info with which user can\npoint to the location 2) change the message to something like \"option\n\\\"%s\\\" is conflicting with option \\\"%s\\\"\".\n\n> In some cases it is possible that you'd end up with two messages, one\n> for \"redundant\" and one for the other ways for options to conflict with\n> others; for example collationcmds.c has one that's not as obvious, and\n\nAnd yes, we need to divide up the message for conflicting and\nredundant options on a case-to-case basis.\n\nIn createdb: we just need to modify the error message to \"conflicting\noptions\" or we could just get rid of errdetail and have the error\nmessage directly saying \"LOCALE cannot be specified together with\nLC_COLLATE or LC_CTYPE\". 
Redundant options are just caught in the\nabove for loop in createdb.\n if (dlocale && (dcollate || dctype))\n ereport(ERROR,\n (errcode(ERRCODE_SYNTAX_ERROR),\n errmsg(\"conflicting or redundant options\"),\n errdetail(\"LOCALE cannot be specified together with\nLC_COLLATE or LC_CTYPE.\")));\n\nIn AlterDatabase: we can remove parser_errposition because the option\nname in the error message would give the right information.\n if (list_length(stmt->options) != 1)\n ereport(ERROR,\n (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n errmsg(\"option \\\"%s\\\" cannot be specified with\nother options\",\n dtablespace->defname),\n parser_errposition(pstate, dtablespace->location)));\n\nIn compute_common_attribute: we can remove goto duplicate_error and\nhave the message change to \"option \\\"%s\\\" specified more than once\".\n\nIn DefineType: we need to rework for loop.\nI found another problem with collationcmds.c is that it doesn't error\nout if some of the options are specified more than once, something\nlike below. I think the option checking \"for loop\" in DefineCollation\nneeds to be reworked.\nCREATE COLLATION case_insensitive (provider = icu, provider =\nsomeother locale = '@colStrength=secondary', deterministic = false,\ndeterministic = true);\n\n> force_quote/force_quote_all in COPY have their own thing too.\n\nWe can remove the errhint for force_not_null and force_null along with\nthe error message wording change to \"option \\\"%s\\\" specified more than\nonce\".\n\nUpon looking at error \"conflicting or redundant options\" instances, to\ndo the above we need a lot of code changes, I'm not sure that will be\nacceptable.\n\nOne thing is that all the option checking for loops are doing these\nthings in common: 1) fetching the values bool, int, float, string of\nthe options 2) redundant checking. 
I feel we need to invent a common\nAPI to which we pass in 1) a list of allowed options for a particular\ncommand, we can have these as static data structure\n{allowed_option_name, data_type}, 2) a list of user specified options\n3) the API will return a list of fetched i.e. parsed values\n{user_specified_option_name, data_type, value}. Maybe the API can\nreturn a hash table of these values so that the callers can look up\nfaster for the required option. The advantage of this API is that we\ndon't need to have many for-loops for options checking in the code.\nI'm not sure it is worth doing though. Thoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 26 Apr 2021 21:10:25 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On 2021-Apr-26, Bharath Rupireddy wrote:\n\n> I agree that we can just be clear about the problem. Looks like the\n> majority of the errors \"conflicting or redundant options\" are for\n> redundant options. So, wherever \"conflicting or redundant options\"\n> exists: 1) change the message to \"option \\\"%s\\\" specified more than\n> once\" and remove parser_errposition if it's there because the option\n> name in the error message would give the info with which user can\n> point to the location\n\nHmm, I would keep the parser_errposition() even if the option name is\nmentioned in the error message. There's no harm in being a little\nredundant, with both the option name and the error cursor showing the\nsame thing.\n\n> 2) change the message to something like \"option \\\"%s\\\" is conflicting\n> with option \\\"%s\\\"\".\n\nMaybe, but since these would all be special cases, I think we'd need to\ndiscuss them individually. 
I would suggest that in order not to stall\nthis patch, these cases should all stay as \"redundant or conflicting\noptions\" -- that is, avoid any further change apart from exactly the\nthing you came here to change. You can submit a 0002 patch to change\nthose other errors. That way, even if those changes end up rejected for\nwhatever reason, you still got your 0001 done (which would change the\nbulk of \"conflicting or redundant\" error to the \"option %s already\nspecified\" error). Some progress is better than none.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n<Schwern> It does it in a really, really complicated way\n<crab> why does it need to be complicated?\n<Schwern> Because it's MakeMaker.\n\n\n", "msg_date": "Mon, 26 Apr 2021 11:54:32 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Mon, Apr 26, 2021 at 9:24 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Apr-26, Bharath Rupireddy wrote:\n>\n> > I agree that we can just be clear about the problem. Looks like the\n> > majority of the errors \"conflicting or redundant options\" are for\n> > redundant options. So, wherever \"conflicting or redundant options\"\n> > exists: 1) change the message to \"option \\\"%s\\\" specified more than\n> > once\" and remove parser_errposition if it's there because the option\n> > name in the error message would give the info with which user can\n> > point to the location\n>\n> Hmm, I would keep the parser_errposition() even if the option name is\n> mentioned in the error message. 
There's no harm in being a little\n> redundant, with both the option name and the error cursor showing the\n> same thing.\n\nAgreed.\n\n> > 2) change the message to something like \"option \\\"%s\\\" is conflicting\n> > with option \\\"%s\\\"\".\n>\n> Maybe, but since these would all be special cases, I think we'd need to\n> discuss them individually. I would suggest that in order not to stall\n> this patch, these cases should all stay as \"redundant or conflicting\n> options\" -- that is, avoid any further change apart from exactly the\n> thing you came here to change. You can submit a 0002 patch to change\n> those other errors. That way, even if those changes end up rejected for\n> whatever reason, you still got your 0001 done (which would change the\n> bulk of \"conflicting or redundant\" error to the \"option %s already\n> specified\" error). Some progress is better than none.\n\n+1 to have all the conflicting options error message changes as 0002\npatch or I'm okay even if we discuss those changes after the 0001\npatch goes in.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 27 Apr 2021 06:20:57 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Mon, Apr 26, 2021 at 9:10 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> I found another problem with collationcmds.c is that it doesn't error\n> out if some of the options are specified more than once, something\n> like below. I think the option checking \"for loop\" in DefineCollation\n> needs to be reworked.\n> CREATE COLLATION case_insensitive (provider = icu, provider =\n> someother locale = '@colStrength=secondary', deterministic = false,\n> deterministic = true);\n\nI'm thinking that the above problem should be discussed separately. 
I\nwill start a new thread soon on this.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 27 Apr 2021 06:23:56 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Tue, Apr 27, 2021 at 6:23 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Apr 26, 2021 at 9:10 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > I found another problem with collationcmds.c is that it doesn't error\n> > out if some of the options are specified more than once, something\n> > like below. I think the option checking \"for loop\" in DefineCollation\n> > needs to be reworked.\n> > CREATE COLLATION case_insensitive (provider = icu, provider =\n> > someother locale = '@colStrength=secondary', deterministic = false,\n> > deterministic = true);\n>\n> I'm thinking that the above problem should be discussed separately. I\n> will start a new thread soon on this.\n\nI started a separate thread -\nhttps://www.postgresql.org/message-id/CALj2ACWtL6fTLdyF4R_YkPtf1YEDb6FUoD5DGAki3rpD%2BsWqiA%40mail.gmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 27 Apr 2021 17:13:33 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Mon, Apr 26, 2021 at 9:24 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Apr-26, Bharath Rupireddy wrote:\n>\n> > I agree that we can just be clear about the problem. Looks like the\n> > majority of the errors \"conflicting or redundant options\" are for\n> > redundant options. 
So, wherever \"conflicting or redundant options\"\n> > exists: 1) change the message to \"option \\\"%s\\\" specified more than\n> > once\" and remove parser_errposition if it's there because the option\n> > name in the error message would give the info with which user can\n> > point to the location\n>\n> Hmm, I would keep the parser_errposition() even if the option name is\n> mentioned in the error message. There's no harm in being a little\n> redundant, with both the option name and the error cursor showing the\n> same thing.\n>\n> > 2) change the message to something like \"option \\\"%s\\\" is conflicting\n> > with option \\\"%s\\\"\".\n>\n> Maybe, but since these would all be special cases, I think we'd need to\n> discuss them individually. I would suggest that in order not to stall\n> this patch, these cases should all stay as \"redundant or conflicting\n> options\" -- that is, avoid any further change apart from exactly the\n> thing you came here to change. You can submit a 0002 patch to change\n> those other errors. That way, even if those changes end up rejected for\n> whatever reason, you still got your 0001 done (which would change the\n> bulk of \"conflicting or redundant\" error to the \"option %s already\n> specified\" error). Some progress is better than none.\n\nThanks for the comments, please find the attached v3 patch which has\nthe change for the first part. I will make changes for 002 and post it\nsoon.\nThoughts?\n\nRegards,\nVignesh", "msg_date": "Thu, 29 Apr 2021 22:17:58 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On 2021-Apr-29, vignesh C wrote:\n\n> Thanks for the comments, please find the attached v3 patch which has\n> the change for the first part.\n\nLooks good to me. 
I would only add parser_errposition() to the few\nerror sites missing that.\n\n\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"Uno puede defenderse de los ataques; contra los elogios se esta indefenso\"\n\n\n", "msg_date": "Thu, 29 Apr 2021 13:14:17 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Thu, Apr 29, 2021 at 10:44 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Apr-29, vignesh C wrote:\n>\n> > Thanks for the comments, please find the attached v3 patch which has\n> > the change for the first part.\n>\n> Looks good to me. 
I would only add parser_errposition() to the few\n> > error sites missing that.\n>\n> Yes, we need to add parser_errposition as agreed in [1].\n>\n> I think we will have to make changes in compute_common_attribute as\n> well because the error in the duplicate_error goto statement is\n> actually for the duplicate option specified more than once, we can do\n> something like the attached. If it seems okay, it can be merged with\n> the main patch.\n\n+ DefElem *duplicate_item = NULL;\n+\n if (strcmp(defel->defname, \"volatility\") == 0)\n {\n if (is_procedure)\n goto procedure_error;\n if (*volatility_item)\n- goto duplicate_error;\n+ duplicate_item = defel;\n\nIn this function, we already have the \"defel\" variable then I do not\nunderstand why you are using one extra variable and assigning defel to\nthat?\nIf the goal is to just improve the error message then you can simply\nuse defel->defname?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 30 Apr 2021 10:17:00 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Fri, Apr 30, 2021 at 10:17 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> In this function, we already have the \"defel\" variable then I do not\n> understand why you are using one extra variable and assigning defel to\n> that?\n> If the goal is to just improve the error message then you can simply\n> use defel->defname?\n\nYeah. I can do that. Thanks for the comment.\n\nWhile on this, I also removed the duplicate_error and procedure_error\ngoto statements, because IMHO, using goto statements is not an elegant\nway. I used boolean flags to do the job instead. 
See the attached and\nlet me know what you think.\n\nJust for completion, I also attached Vignesh's latest patch v3 as-is,\nin case anybody wants to review it.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 30 Apr 2021 10:43:21 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Fri, Apr 30, 2021 at 10:43 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Apr 30, 2021 at 10:17 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > In this function, we already have the \"defel\" variable then I do not\n> > understand why you are using one extra variable and assigning defel to\n> > that?\n> > If the goal is to just improve the error message then you can simply\n> > use defel->defname?\n>\n> Yeah. I can do that. Thanks for the comment.\n>\n> While on this, I also removed the duplicate_error and procedure_error\n> goto statements, because IMHO, using goto statements is not an elegant\n> way. I used boolean flags to do the job instead. See the attached and\n> let me know what you think.\n\nOkay, but I see one side effect of this, basically earlier on\nprocedure_error and duplicate_error we were not assigning anything to\noutput parameters, e.g. volatility_item, but now those values will be\nassigned with defel even if there is an error. So I think we should\nbetter avoid such change. 
But even if you want to do then better\ncheck for any impacts on the caller.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 30 Apr 2021 10:50:44 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Fri, Apr 30, 2021 at 10:51 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Fri, Apr 30, 2021 at 10:43 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Fri, Apr 30, 2021 at 10:17 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > In this function, we already have the \"defel\" variable then I do not\n> > > understand why you are using one extra variable and assigning defel to\n> > > that?\n> > > If the goal is to just improve the error message then you can simply\n> > > use defel->defname?\n> >\n> > Yeah. I can do that. Thanks for the comment.\n> >\n> > While on this, I also removed the duplicate_error and procedure_error\n> > goto statements, because IMHO, using goto statements is not an elegant\n> > way. I used boolean flags to do the job instead. See the attached and\n> > let me know what you think.\n>\n> Okay, but I see one side effect of this, basically earlier on\n> procedure_error and duplicate_error we were not assigning anything to\n> output parameters, e.g. volatility_item, but now those values will be\n> assigned with defel even if there is an error.\n\nYes, but on ereport(ERROR, we don't come back right? The txn gets\naborted and the control is not returned to the caller instead it will\ngo to sigjmp_buf of the backend.\n\n> So I think we should\n> better avoid such change. 
But even if you want to do then better\n> check for any impacts on the caller.\n\nAFAICS, there will not be any impact on the caller, as the control\ndoesn't return to the caller on error.\n\nAnd another good reason to remove the goto statements is that they\nhave return false; statements just to suppress the compiler and having\nthem after ereport(ERROR, doesn't make any sense to me.\n\nduplicate_error:\n ereport(ERROR,\n (errcode(ERRCODE_SYNTAX_ERROR),\n errmsg(\"conflicting or redundant options\"),\n parser_errposition(pstate, defel->location)));\n return false; /* keep compiler quiet */\n\nprocedure_error:\n ereport(ERROR,\n (errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),\n errmsg(\"invalid attribute in procedure definition\"),\n parser_errposition(pstate, defel->location)));\n return false;\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 30 Apr 2021 11:09:31 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Fri, Apr 30, 2021 at 11:09 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Apr 30, 2021 at 10:51 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Fri, Apr 30, 2021 at 10:43 AM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > On Fri, Apr 30, 2021 at 10:17 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > In this function, we already have the \"defel\" variable then I do not\n> > > > understand why you are using one extra variable and assigning defel to\n> > > > that?\n> > > > If the goal is to just improve the error message then you can simply\n> > > > use defel->defname?\n> > >\n> > > Yeah. I can do that. 
Thanks for the comment.\n> > >\n> > > While on this, I also removed the duplicate_error and procedure_error\n> > > goto statements, because IMHO, using goto statements is not an elegant\n> > > way. I used boolean flags to do the job instead. See the attached and\n> > > let me know what you think.\n> >\n> > Okay, but I see one side effect of this, basically earlier on\n> > procedure_error and duplicate_error we were not assigning anything to\n> > output parameters, e.g. volatility_item, but now those values will be\n> > assigned with defel even if there is an error.\n>\n> Yes, but on ereport(ERROR, we don't come back right? The txn gets\n> aborted and the control is not returned to the caller instead it will\n> go to sigjmp_buf of the backend.\n>\n> > So I think we should\n> > better avoid such change. But even if you want to do then better\n> > check for any impacts on the caller.\n>\n> AFAICS, there will not be any impact on the caller, as the control\n> doesn't return to the caller on error.\n\nI see.\n\nother comments\n\n if (strcmp(defel->defname, \"volatility\") == 0)\n {\n if (is_procedure)\n- goto procedure_error;\n+ is_procedure_error = true;\n if (*volatility_item)\n- goto duplicate_error;\n+ is_duplicate_error = true;\n\nAnother side effect I see is, in the above check earlier if\nis_procedure was true it was directly goto procedure_error, but now it\nwill also check the if (*volatility_item) and it may set\nis_duplicate_error also true, which seems wrong to me. 
I think you\ncan change it to\n\nif (is_procedure)\n is_procedure_error = true;\nelse if (*volatility_item)\n is_duplicate_error = true;\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 30 Apr 2021 11:23:32 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Fri, Apr 30, 2021 at 11:23 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Fri, Apr 30, 2021 at 11:09 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Fri, Apr 30, 2021 at 10:51 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Fri, Apr 30, 2021 at 10:43 AM Bharath Rupireddy\n> > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > >\n> > > > On Fri, Apr 30, 2021 at 10:17 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > > In this function, we already have the \"defel\" variable then I do not\n> > > > > understand why you are using one extra variable and assigning defel to\n> > > > > that?\n> > > > > If the goal is to just improve the error message then you can simply\n> > > > > use defel->defname?\n> > > >\n> > > > Yeah. I can do that. Thanks for the comment.\n> > > >\n> > > > While on this, I also removed the duplicate_error and procedure_error\n> > > > goto statements, because IMHO, using goto statements is not an elegant\n> > > > way. I used boolean flags to do the job instead. See the attached and\n> > > > let me know what you think.\n> > >\n> > > Okay, but I see one side effect of this, basically earlier on\n> > > procedure_error and duplicate_error we were not assigning anything to\n> > > output parameters, e.g. volatility_item, but now those values will be\n> > > assigned with defel even if there is an error.\n> >\n> > Yes, but on ereport(ERROR, we don't come back right? 
The txn gets\n> > aborted and the control is not returned to the caller instead it will\n> > go to sigjmp_buf of the backend.\n> >\n> > > So I think we should\n> > > better avoid such change. But even if you want to do then better\n> > > check for any impacts on the caller.\n> >\n> > AFAICS, there will not be any impact on the caller, as the control\n> > doesn't return to the caller on error.\n>\n> I see.\n>\n> other comments\n>\n> if (strcmp(defel->defname, \"volatility\") == 0)\n> {\n> if (is_procedure)\n> - goto procedure_error;\n> + is_procedure_error = true;\n> if (*volatility_item)\n> - goto duplicate_error;\n> + is_duplicate_error = true;\n>\n> Another side effect I see is, in the above check earlier if\n> is_procedure was true it was directly goto procedure_error, but now it\n> will also check the if (*volatility_item) and it may set\n> is_duplicate_error also true, which seems wrong to me. I think you\n> can change it to\n>\n> if (is_procedure)\n> is_procedure_error = true;\n> else if (*volatility_item)\n> is_duplicate_error = true;\n\nThanks. Done that way, see the attached v3. 
Let's see what others has to say.\n\nAlso attaching Vignesh's v3 patch as-is, just for completion.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 30 Apr 2021 12:36:07 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Fri, Apr 30, 2021 at 5:06 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Apr 30, 2021 at 11:23 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Fri, Apr 30, 2021 at 11:09 AM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > On Fri, Apr 30, 2021 at 10:51 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > On Fri, Apr 30, 2021 at 10:43 AM Bharath Rupireddy\n> > > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > > >\n> > > > > On Fri, Apr 30, 2021 at 10:17 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > > > In this function, we already have the \"defel\" variable then I do not\n> > > > > > understand why you are using one extra variable and assigning defel to\n> > > > > > that?\n> > > > > > If the goal is to just improve the error message then you can simply\n> > > > > > use defel->defname?\n> > > > >\n> > > > > Yeah. I can do that. Thanks for the comment.\n> > > > >\n> > > > > While on this, I also removed the duplicate_error and procedure_error\n> > > > > goto statements, because IMHO, using goto statements is not an elegant\n> > > > > way. I used boolean flags to do the job instead. See the attached and\n> > > > > let me know what you think.\n> > > >\n> > > > Okay, but I see one side effect of this, basically earlier on\n> > > > procedure_error and duplicate_error we were not assigning anything to\n> > > > output parameters, e.g. 
volatility_item, but now those values will be\n> > > > assigned with defel even if there is an error.\n> > >\n> > > Yes, but on ereport(ERROR, we don't come back right? The txn gets\n> > > aborted and the control is not returned to the caller instead it will\n> > > go to sigjmp_buf of the backend.\n> > >\n> > > > So I think we should\n> > > > better avoid such change. But even if you want to do then better\n> > > > check for any impacts on the caller.\n> > >\n> > > AFAICS, there will not be any impact on the caller, as the control\n> > > doesn't return to the caller on error.\n> >\n> > I see.\n> >\n> > other comments\n> >\n> > if (strcmp(defel->defname, \"volatility\") == 0)\n> > {\n> > if (is_procedure)\n> > - goto procedure_error;\n> > + is_procedure_error = true;\n> > if (*volatility_item)\n> > - goto duplicate_error;\n> > + is_duplicate_error = true;\n> >\n> > Another side effect I see is, in the above check earlier if\n> > is_procedure was true it was directly goto procedure_error, but now it\n> > will also check the if (*volatility_item) and it may set\n> > is_duplicate_error also true, which seems wrong to me. I think you\n> > can change it to\n> >\n> > if (is_procedure)\n> > is_procedure_error = true;\n> > else if (*volatility_item)\n> > is_duplicate_error = true;\n>\n> Thanks. Done that way, see the attached v3. Let's see what others has to say.\n>\n\nHmmm - I am not so sure about those goto replacements. I think the\npoor goto has a bad reputation, but not all gotos are bad. I've met\nsome very nice gotos.\n\nEach goto here was doing exactly what it looked like it was doing,\nwhereas all these boolean replacements have now introduced subtle\ndifferences. e.g. now the *volatility_item = defel; assignment (and\nall similar assignments) will happen which previously did not happen\nat all. It leaves the reader wondering if assigning to those\nreferences could have any side-effects at the caller. Probably there\nare no problems at all....but can you be sure? 
Meanwhile, those\n\"inelegant\" gotos did not give any cause for such doubts.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 30 Apr 2021 19:19:03 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Fri, Apr 30, 2021 at 12:36 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > other comments\n> >\n> > if (strcmp(defel->defname, \"volatility\") == 0)\n> > {\n> > if (is_procedure)\n> > - goto procedure_error;\n> > + is_procedure_error = true;\n> > if (*volatility_item)\n> > - goto duplicate_error;\n> > + is_duplicate_error = true;\n> >\n> > Another side effect I see is, in the above check earlier if\n> > is_procedure was true it was directly goto procedure_error, but now it\n> > will also check the if (*volatility_item) and it may set\n> > is_duplicate_error also true, which seems wrong to me. I think you\n> > can change it to\n> >\n> > if (is_procedure)\n> > is_procedure_error = true;\n> > else if (*volatility_item)\n> > is_duplicate_error = true;\n>\n> Thanks. Done that way, see the attached v3. Let's see what others has to say.\n>\n> Also attaching Vignesh's v3 patch as-is, just for completion.\n\nLooking into this again, why not as shown below? 
IMHO, this way the\ncode will be logically the same as it was before the patch, basically\nwhy to process an extra statement ( *volatility_item = defel;) if we\nhave already decided to error.\n\n if (is_procedure)\n is_procedure_error = true;\nelse if (*volatility_item)\n is_duplicate_error = true;\nelse\n *volatility_item = defel;\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 30 Apr 2021 14:49:34 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Fri, Apr 30, 2021 at 2:49 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> Looking into this again, why not as shown below? IMHO, this way the\n> code will be logically the same as it was before the patch, basically\n> why to process an extra statement ( *volatility_item = defel;) if we\n> have already decided to error.\n\nI changed my mind given the concerns raised on removing the goto\nstatements. 
We could just do as below:\n\ndiff --git a/src/backend/commands/functioncmds.c\nb/src/backend/commands/functioncmds.c\nindex 9548287217..1f1c74c379 100644\n--- a/src/backend/commands/functioncmds.c\n+++ b/src/backend/commands/functioncmds.c\n@@ -575,7 +575,7 @@ compute_common_attribute(ParseState *pstate,\n duplicate_error:\n ereport(ERROR,\n (errcode(ERRCODE_SYNTAX_ERROR),\n- errmsg(\"conflicting or redundant options\"),\n+ errmsg(\"option \\\"%s\\\" specified more than once\", defel->defname),\n parser_errposition(pstate, defel->location)));\n return false; /* keep compiler quiet */\n\nI'm not attaching above one line change as a patch, maybe Vignesh can\nmerge this into the main patch.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 1 May 2021 10:42:58 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Sat, May 1, 2021 at 10:43 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Apr 30, 2021 at 2:49 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > Looking into this again, why not as shown below? IMHO, this way the\n> > code will be logically the same as it was before the patch, basically\n> > why to process an extra statement ( *volatility_item = defel;) if we\n> > have already decided to error.\n>\n> I changed my mind given the concerns raised on removing the goto\n> statements. 
We could just do as below:\n\nOkay, that make sense.\n\n> diff --git a/src/backend/commands/functioncmds.c\n> b/src/backend/commands/functioncmds.c\n> index 9548287217..1f1c74c379 100644\n> --- a/src/backend/commands/functioncmds.c\n> +++ b/src/backend/commands/functioncmds.c\n> @@ -575,7 +575,7 @@ compute_common_attribute(ParseState *pstate,\n> duplicate_error:\n> ereport(ERROR,\n> (errcode(ERRCODE_SYNTAX_ERROR),\n> - errmsg(\"conflicting or redundant options\"),\n> + errmsg(\"option \\\"%s\\\" specified more than once\", defel->defname),\n> parser_errposition(pstate, defel->location)));\n> return false; /* keep compiler quiet */\n>\n> I'm not attaching above one line change as a patch, maybe Vignesh can\n> merge this into the main patch.\n\n+1\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 1 May 2021 10:47:25 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Sat, May 1, 2021 at 10:47 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Sat, May 1, 2021 at 10:43 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Fri, Apr 30, 2021 at 2:49 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > Looking into this again, why not as shown below? IMHO, this way the\n> > > code will be logically the same as it was before the patch, basically\n> > > why to process an extra statement ( *volatility_item = defel;) if we\n> > > have already decided to error.\n> >\n> > I changed my mind given the concerns raised on removing the goto\n> > statements. 
We could just do as below:\n>\n> Okay, that make sense.\n>\n> > diff --git a/src/backend/commands/functioncmds.c\n> > b/src/backend/commands/functioncmds.c\n> > index 9548287217..1f1c74c379 100644\n> > --- a/src/backend/commands/functioncmds.c\n> > +++ b/src/backend/commands/functioncmds.c\n> > @@ -575,7 +575,7 @@ compute_common_attribute(ParseState *pstate,\n> > duplicate_error:\n> > ereport(ERROR,\n> > (errcode(ERRCODE_SYNTAX_ERROR),\n> > - errmsg(\"conflicting or redundant options\"),\n> > + errmsg(\"option \\\"%s\\\" specified more than once\", defel->defname),\n> > parser_errposition(pstate, defel->location)));\n> > return false; /* keep compiler quiet */\n> >\n> > I'm not attaching above one line change as a patch, maybe Vignesh can\n> > merge this into the main patch.\n\nThanks for the comments. I have merged the change into the attached patch.\nThoughts?\n\nRegards,\nVignesh", "msg_date": "Sat, 1 May 2021 19:25:15 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Thu, Apr 29, 2021 at 10:44 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Apr-29, vignesh C wrote:\n>\n> > Thanks for the comments, please find the attached v3 patch which has\n> > the change for the first part.\n>\n> Looks good to me. 
I would only add parser_errposition() to the few\n> error sites missing that.\n\nI have not included parser_errposition as ParseState was not available\nfor these errors.\nThoughts?\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sat, 1 May 2021 19:26:44 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Sat, May 1, 2021 at 7:25 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > I'm not attaching above one line change as a patch, maybe Vignesh can\n> > > merge this into the main patch.\n>\n> Thanks for the comments. I have merged the change into the attached patch.\n> Thoughts?\n\nThanks! v4 basically LGTM. Can we park this in the current commitfest\nif not done already?\n\nUpon looking at the number of places where we have the \"option \\\"%s\\\"\nspecified more than once\" error, I, now strongly feel that we should\nuse goto duplicate_error approach like in compute_common_attribute, so\nthat we will have only one ereport(ERROR. We can change it in\nfollowing files: copy.c, dbcommands.c, extension.c,\ncompute_function_attributes, sequence.c, subscriptioncmds.c,\ntypecmds.c, user.c, walsender.c, pgoutput.c. This will reduce the LOC\ngreatly.\n\nThoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 1 May 2021 21:01:48 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On 2021-May-01, vignesh C wrote:\n\n> On Thu, Apr 29, 2021 at 10:44 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > On 2021-Apr-29, vignesh C wrote:\n> >\n> > > Thanks for the comments, please find the attached v3 patch which has\n> > > the change for the first part.\n> >\n> > Looks good to me. 
I would only add parser_errposition() to the few\n> > error sites missing that.\n> \n> I have not included parser_errposition as ParseState was not available\n> for these errors.\n\nYeah, it's tough to do that in a few of those such as validator\nfunctions, and I don't think we'd want to do that. However there are\nsome cases where we can easily add the parsestate as an argument -- for\nexample CreatePublication can get it in ProcessUtilitySlow and pass it\ndown to parse_publication_options; likewise for ExecuteDoStmt. I didn't\ncheck other places.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\n\n", "msg_date": "Sat, 1 May 2021 13:24:09 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Sat, May 1, 2021, 10:54 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2021-May-01, vignesh C wrote:\n\n> On Thu, Apr 29, 2021 at 10:44 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\n> wrote:\n> > >\n> > > On 2021-Apr-29, vignesh C wrote:\n> > >\n> > > > Thanks for the comments, please find the attached v3 patch which has\n> > > > the change for the first part.\n> > >\n> > > Looks good to me. I would only add parser_errposition() to the few\n> > > error sites missing that.\n> >\n> > I have not included parser_errposition as ParseState was not available\n> > for these errors.\n>\n> Yeah, it's tough to do that in a few of those such as validator\n> functions, and I don't think we'd want to do that. However there are\n> some cases where we can easily add the parsestate as an argument -- for\n> example CreatePublication can get it in ProcessUtilitySlow and pass it\n> down to parse_publication_options; likewise for ExecuteDoStmt. 
I didn't\n> check other places.\n>\n\nIMO, it's not good to change the function API just for showing up\nparse_position (which is there for cosmetic reasons I feel) in an error\nwhich actually has the option name clearly mentioned in the error message.\n\nBest Regards,\nBharath Rupireddy.\nEnterpriseDB.\n\n>\n", "msg_date": "Sat, 1 May 2021 23:25:44 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On 2021-May-01, Bharath Rupireddy wrote:\n\n> IMO, it's not good to change the function API just for showing up\n> parse_position (which is there for cosmetic reasons I feel) in an error\n> which actually has the option name clearly mentioned in the error message.\n\nWhy not? We've done it before, I'm sure you can find examples in the\ngit log. 
The function API is not that critical -- these functions are\n> mostly only called from ProcessUtility and friends.\n\nI feel it is better to include it wherever possible.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sun, 2 May 2021 16:00:59 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Sun, May 2, 2021 at 6:31 PM vignesh C <vignesh21@gmail.com> wrote:\n\n> On Sun, May 2, 2021 at 4:27 AM Alvaro Herrera <alvherre@alvh.no-ip.org>\n> wrote:\n> >\n> > On 2021-May-01, Bharath Rupireddy wrote:\n> >\n> > > IMO, it's not good to change the function API just for showing up\n> > > parse_position (which is there for cosmetic reasons I feel) in an error\n> > > which actually has the option name clearly mentioned in the error\n> message.\n> >\n> > Why not? We've done it before, I'm sure you can find examples in the\n> > git log. The function API is not that critical -- these functions are\n> > mostly only called from ProcessUtility and friends.\n>\n> I feel it is better to include it wherever possible.\n>\n\n+1", "msg_date": "Sun, 2 May 2021 19:42:54 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Sat, May 1, 2021 at 10:54 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-May-01, vignesh C wrote:\n>\n> > On Thu, Apr 29, 2021 at 10:44 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > >\n> > > On 2021-Apr-29, vignesh C wrote:\n> > >\n> > > > Thanks for the comments, please find the attached v3 patch which has\n> > > > the change for the first part.\n> > >\n> > > Looks good to me. I would only add parser_errposition() to the few\n> > > error sites missing that.\n> >\n> > I have not included parser_errposition as ParseState was not available\n> > for these errors.\n>\n> Yeah, it's tough to do that in a few of those such as validator\n> functions, and I don't think we'd want to do that. However there are\n> some cases where we can easily add the parsestate as an argument -- for\n> example CreatePublication can get it in ProcessUtilitySlow and pass it\n> down to parse_publication_options; likewise for ExecuteDoStmt. I didn't\n> check other places.\n\nThanks for the comments. I have changed in most of the places except\nfor a few places like plugin functions, internal commands and changes\nthat required changing more levels of function callers. 
Attached patch\nhas the changes for the same.\nThoughts?\n\nRegards,\nVignesh", "msg_date": "Sun, 2 May 2021 20:42:30 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Sat, May 1, 2021 at 9:02 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Sat, May 1, 2021 at 7:25 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > > I'm not attaching above one line change as a patch, maybe Vignesh can\n> > > > merge this into the main patch.\n> >\n> > Thanks for the comments. I have merged the change into the attached patch.\n> > Thoughts?\n>\n> Thanks! v4 basically LGTM. Can we park this in the current commitfest\n> if not done already?\n>\n> Upon looking at the number of places where we have the \"option \\\"%s\\\"\n> specified more than once\" error, I, now strongly feel that we should\n> use goto duplicate_error approach like in compute_common_attribute, so\n> that we will have only one ereport(ERROR. We can change it in\n> following files: copy.c, dbcommands.c, extension.c,\n> compute_function_attributes, sequence.c, subscriptioncmds.c,\n> typecmds.c, user.c, walsender.c, pgoutput.c. 
This will reduce the LOC\n> greatly.\n>\n> Thoughts?\n\nI have made the changes for this, I have posted the same in the v5\npatch posted in my earlier mail.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sun, 2 May 2021 20:44:25 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Sun, May 2, 2021 at 8:44 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Sat, May 1, 2021 at 9:02 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Sat, May 1, 2021 at 7:25 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > > > I'm not attaching above one line change as a patch, maybe Vignesh can\n> > > > > merge this into the main patch.\n> > >\n> > > Thanks for the comments. I have merged the change into the attached patch.\n> > > Thoughts?\n> >\n> > Thanks! v4 basically LGTM. Can we park this in the current commitfest\n> > if not done already?\n> >\n> > Upon looking at the number of places where we have the \"option \\\"%s\\\"\n> > specified more than once\" error, I, now strongly feel that we should\n> > use goto duplicate_error approach like in compute_common_attribute, so\n> > that we will have only one ereport(ERROR. We can change it in\n> > following files: copy.c, dbcommands.c, extension.c,\n> > compute_function_attributes, sequence.c, subscriptioncmds.c,\n> > typecmds.c, user.c, walsender.c, pgoutput.c. This will reduce the LOC\n> > greatly.\n> >\n> > Thoughts?\n>\n> I have made the changes for this, I have posted the same in the v5\n> patch posted in my earlier mail.\n\nThanks! The v5 patch looks good to me. 
Let's see if all agree on the\ngoto duplicate_error approach which could reduce the LOC by ~80.\n\nI don't see it in the current commitfest, can we park it there so that\nthe patch will get tested on cfbot systems?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 3 May 2021 12:08:06 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Mon, May 3, 2021 at 12:08 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Sun, May 2, 2021 at 8:44 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Sat, May 1, 2021 at 9:02 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > On Sat, May 1, 2021 at 7:25 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > > > > I'm not attaching above one line change as a patch, maybe Vignesh can\n> > > > > > merge this into the main patch.\n> > > >\n> > > > Thanks for the comments. I have merged the change into the attached patch.\n> > > > Thoughts?\n> > >\n> > > Thanks! v4 basically LGTM. Can we park this in the current commitfest\n> > > if not done already?\n> > >\n> > > Upon looking at the number of places where we have the \"option \\\"%s\\\"\n> > > specified more than once\" error, I, now strongly feel that we should\n> > > use goto duplicate_error approach like in compute_common_attribute, so\n> > > that we will have only one ereport(ERROR. We can change it in\n> > > following files: copy.c, dbcommands.c, extension.c,\n> > > compute_function_attributes, sequence.c, subscriptioncmds.c,\n> > > typecmds.c, user.c, walsender.c, pgoutput.c. This will reduce the LOC\n> > > greatly.\n> > >\n> > > Thoughts?\n> >\n> > I have made the changes for this, I have posted the same in the v5\n> > patch posted in my earlier mail.\n>\n> Thanks! 
The v5 patch looks good to me. Let's see if all agree on the\n> goto duplicate_error approach which could reduce the LOC by ~80.\n\nI think the \"goto duplicate_error\" approach looks good, it avoids\nduplicating the same error code multiple times.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 3 May 2021 13:41:21 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Mon, May 3, 2021 at 12:08 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Sun, May 2, 2021 at 8:44 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Sat, May 1, 2021 at 9:02 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > On Sat, May 1, 2021 at 7:25 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > > > > I'm not attaching above one line change as a patch, maybe Vignesh can\n> > > > > > merge this into the main patch.\n> > > >\n> > > > Thanks for the comments. I have merged the change into the attached patch.\n> > > > Thoughts?\n> > >\n> > > Thanks! v4 basically LGTM. Can we park this in the current commitfest\n> > > if not done already?\n> > >\n> > > Upon looking at the number of places where we have the \"option \\\"%s\\\"\n> > > specified more than once\" error, I, now strongly feel that we should\n> > > use goto duplicate_error approach like in compute_common_attribute, so\n> > > that we will have only one ereport(ERROR. We can change it in\n> > > following files: copy.c, dbcommands.c, extension.c,\n> > > compute_function_attributes, sequence.c, subscriptioncmds.c,\n> > > typecmds.c, user.c, walsender.c, pgoutput.c. This will reduce the LOC\n> > > greatly.\n> > >\n> > > Thoughts?\n> >\n> > I have made the changes for this, I have posted the same in the v5\n> > patch posted in my earlier mail.\n>\n> Thanks! 
The v5 patch looks good to me. Let's see if all agree on the\n> goto duplicate_error approach which could reduce the LOC by ~80.\n>\n> I don't see it in the current commitfest, can we park it there so that\n> the patch will get tested on cfbot systems?\n\nI have added an entry in commitfest:\nhttps://commitfest.postgresql.org/33/3103/\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 3 May 2021 18:41:15 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Mon, May 3, 2021 at 1:41 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, May 3, 2021 at 12:08 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Sun, May 2, 2021 at 8:44 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > On Sat, May 1, 2021 at 9:02 PM Bharath Rupireddy\n> > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > >\n> > > > On Sat, May 1, 2021 at 7:25 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > > > > > I'm not attaching above one line change as a patch, maybe Vignesh can\n> > > > > > > merge this into the main patch.\n> > > > >\n> > > > > Thanks for the comments. I have merged the change into the attached patch.\n> > > > > Thoughts?\n> > > >\n> > > > Thanks! v4 basically LGTM. Can we park this in the current commitfest\n> > > > if not done already?\n> > > >\n> > > > Upon looking at the number of places where we have the \"option \\\"%s\\\"\n> > > > specified more than once\" error, I, now strongly feel that we should\n> > > > use goto duplicate_error approach like in compute_common_attribute, so\n> > > > that we will have only one ereport(ERROR. We can change it in\n> > > > following files: copy.c, dbcommands.c, extension.c,\n> > > > compute_function_attributes, sequence.c, subscriptioncmds.c,\n> > > > typecmds.c, user.c, walsender.c, pgoutput.c. 
This will reduce the LOC\n> > > > greatly.\n> > > >\n> > > > Thoughts?\n> > >\n> > > I have made the changes for this, I have posted the same in the v5\n> > > patch posted in my earlier mail.\n> >\n> > Thanks! The v5 patch looks good to me. Let's see if all agree on the\n> > goto duplicate_error approach which could reduce the LOC by ~80.\n>\n> I think the \"goto duplicate_error\" approach looks good, it avoids\n> duplicating the same error code multiple times.\n\nThanks. I will mark the v5 patch \"ready for committer\" if no one has comments.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 4 May 2021 12:49:23 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "> > > Thanks! The v5 patch looks good to me. Let's see if all agree on the\r\n> > > goto duplicate_error approach which could reduce the LOC by ~80.\r\n> >\r\n> > I think the \"goto duplicate_error\" approach looks good, it avoids\r\n> > duplicating the same error code multiple times.\r\n> \r\n> Thanks. 
I will mark the v5 patch \"ready for committer\" if no one has comments.\r\n\r\nHi,\r\n\r\nI looked into the patch and noticed a minor thing.\r\n\r\n+\treturn;\t\t\t\t/* keep compiler quiet */\r\n }\r\n\r\nI think we do not need the comment here.\r\nThe compiler seems not require \"return\" at the end of function\r\nwhen function's return type is VOID.\r\n\r\nIn addition, it seems better to remove these \"return;\" like what\r\ncommit \"3974c4\" did.\r\n\r\nBest regards,\r\nhouzj\r\n", "msg_date": "Sat, 8 May 2021 06:31:22 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Sat, May 8, 2021 at 12:01 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> > > > Thanks! The v5 patch looks good to me. Let's see if all agree on the\n> > > > goto duplicate_error approach which could reduce the LOC by ~80.\n> > >\n> > > I think the \"goto duplicate_error\" approach looks good, it avoids\n> > > duplicating the same error code multiple times.\n> >\n> > Thanks. I will mark the v5 patch \"ready for committer\" if no one has comments.\n>\n> Hi,\n>\n> I looked into the patch and noticed a minor thing.\n>\n> + return; /* keep compiler quiet */\n> }\n>\n> I think we do not need the comment here.\n> The compiler seems not require \"return\" at the end of function\n> when function's return type is VOID.\n>\n> In addition, it seems better to remove these \"return;\" like what\n> commit \"3974c4\" did.\n\nIt looks like that commit removed the plain return statements for a\nvoid returning functions. I see in the code that there are return\nstatements that are there right after ereport(ERROR, just to keep the\ncompiler quiet. Here in this patch also, we have return; statements\nafter ereport(ERROR, for void returning functions. 
I'm not sure\nremoving them would cause some compiler warnings on some platforms\nwith some other compilers. If we're not sure, I'm okay to keep those\nreturn; statements. Thoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 8 May 2021 14:20:34 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Sat, May 8, 2021 at 2:20 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Sat, May 8, 2021 at 12:01 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > > > > Thanks! The v5 patch looks good to me. Let's see if all agree on the\n> > > > > goto duplicate_error approach which could reduce the LOC by ~80.\n> > > >\n> > > > I think the \"goto duplicate_error\" approach looks good, it avoids\n> > > > duplicating the same error code multiple times.\n> > >\n> > > Thanks. I will mark the v5 patch \"ready for committer\" if no one has comments.\n> >\n> > Hi,\n> >\n> > I looked into the patch and noticed a minor thing.\n> >\n> > + return; /* keep compiler quiet */\n> > }\n> >\n> > I think we do not need the comment here.\n> > The compiler seems not require \"return\" at the end of function\n> > when function's return type is VOID.\n> >\n> > In addition, it seems better to remove these \"return;\" like what\n> > commit \"3974c4\" did.\n>\n> It looks like that commit removed the plain return statements for a\n> void returning functions. I see in the code that there are return\n> statements that are there right after ereport(ERROR, just to keep the\n> compiler quiet. Here in this patch also, we have return; statements\n> after ereport(ERROR, for void returning functions. I'm not sure\n> removing them would cause some compiler warnings on some platforms\n> with some other compilers. 
If we're not sure, I'm okay to keep those\n> return; statements. Thoughts?\n\nI felt we could retain the return statement and remove the comments.\nIf you are ok with that I will modify and post a patch for it.\nThoughts?\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sat, 8 May 2021 19:06:42 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Sat, May 8, 2021 at 7:06 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Sat, May 8, 2021 at 2:20 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Sat, May 8, 2021 at 12:01 PM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > > > > Thanks! The v5 patch looks good to me. Let's see if all agree on the\n> > > > > > goto duplicate_error approach which could reduce the LOC by ~80.\n> > > > >\n> > > > > I think the \"goto duplicate_error\" approach looks good, it avoids\n> > > > > duplicating the same error code multiple times.\n> > > >\n> > > > Thanks. I will mark the v5 patch \"ready for committer\" if no one has comments.\n> > >\n> > > Hi,\n> > >\n> > > I looked into the patch and noticed a minor thing.\n> > >\n> > > + return; /* keep compiler quiet */\n> > > }\n> > >\n> > > I think we do not need the comment here.\n> > > The compiler seems not require \"return\" at the end of function\n> > > when function's return type is VOID.\n> > >\n> > > In addition, it seems better to remove these \"return;\" like what\n> > > commit \"3974c4\" did.\n> >\n> > It looks like that commit removed the plain return statements for a\n> > void returning functions. I see in the code that there are return\n> > statements that are there right after ereport(ERROR, just to keep the\n> > compiler quiet. Here in this patch also, we have return; statements\n> > after ereport(ERROR, for void returning functions. 
I'm not sure\n> > removing them would cause some compiler warnings on some platforms\n> > with some other compilers. If we're not sure, I'm okay to keep those\n> > return; statements. Thoughts?\n>\n> I felt we could retain the return statement and remove the comments.\n> If you are ok with that I will modify and post a patch for it.\n> Thoughts?\n\nI would like to keep it as is i.e. both return statement and /* keep\ncompiler quiet */ comment. Having said that, it's better to leave it\nto the committer on whether to have the return statement at all.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 9 May 2021 18:09:38 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "> > > > > > > Thanks! The v5 patch looks good to me. Let's see if all\r\n> > > > > > > agree on the goto duplicate_error approach which could reduce\r\n> the LOC by ~80.\r\n> > > > > >\r\n> > > > > > I think the \"goto duplicate_error\" approach looks good, it\r\n> > > > > > avoids duplicating the same error code multiple times.\r\n> > > > >\r\n> > > > > Thanks. I will mark the v5 patch \"ready for committer\" if no one has\r\n> comments.\r\n> > > >\r\n> > > > Hi,\r\n> > > >\r\n> > > > I looked into the patch and noticed a minor thing.\r\n> > > >\r\n> > > > + return; /* keep compiler quiet */\r\n> > > > }\r\n> > > >\r\n> > > > I think we do not need the comment here.\r\n> > > > The compiler seems not require \"return\" at the end of function\r\n> > > > when function's return type is VOID.\r\n> > > >\r\n> > > > In addition, it seems better to remove these \"return;\" like what\r\n> > > > commit \"3974c4\" did.\r\n> > >\r\n> > > It looks like that commit removed the plain return statements for a\r\n> > > void returning functions. 
I see in the code that there are return\r\n> > > statements that are there right after ereport(ERROR, just to keep\r\n> > > the compiler quiet. Here in this patch also, we have return;\r\n> > > statements after ereport(ERROR, for void returning functions. I'm\r\n> > > not sure removing them would cause some compiler warnings on some\r\n> > > platforms with some other compilers. If we're not sure, I'm okay to\r\n> > > keep those return; statements. Thoughts?\r\n> >\r\n> > I felt we could retain the return statement and remove the comments.\r\n> > If you are ok with that I will modify and post a patch for it.\r\n> > Thoughts?\r\n> \r\n> I would like to keep it as is i.e. both return statement and /* keep compiler\r\n> quiet */ comment. Having said that, it's better to leave it to the committer on\r\n> whether to have the return statement at all.\r\n\r\nYes, it's better to leave it to the committer on whether to have the \"return;\".\r\nBut, I think at least removing \"return;\" which is at the *end* of the function will not cause any warning.\r\nSuch as:\r\n\r\n+ return; /* keep compiler quiet */\r\n}\r\n\r\nSo, I'd vote for at least removing the comment \" keep the compiler quiet \".\r\n\r\nBest regards,\r\nhouzj\r\n", "msg_date": "Mon, 10 May 2021 00:30:09 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Mon, May 10, 2021 at 6:00 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> > > > > > > > Thanks! The v5 patch looks good to me. Let's see if all\n> > > > > > > > agree on the goto duplicate_error approach which could reduce\n> > the LOC by ~80.\n> > > > > > >\n> > > > > > > I think the \"goto duplicate_error\" approach looks good, it\n> > > > > > > avoids duplicating the same error code multiple times.\n> > > > > >\n> > > > > > Thanks. 
I will mark the v5 patch \"ready for committer\" if no one has\n> > comments.\n> > > > >\n> > > > > Hi,\n> > > > >\n> > > > > I looked into the patch and noticed a minor thing.\n> > > > >\n> > > > > + return; /* keep compiler quiet */\n> > > > > }\n> > > > >\n> > > > > I think we do not need the comment here.\n> > > > > The compiler seems not require \"return\" at the end of function\n> > > > > when function's return type is VOID.\n> > > > >\n> > > > > In addition, it seems better to remove these \"return;\" like what\n> > > > > commit \"3974c4\" did.\n> > > >\n> > > > It looks like that commit removed the plain return statements for a\n> > > > void returning functions. I see in the code that there are return\n> > > > statements that are there right after ereport(ERROR, just to keep\n> > > > the compiler quiet. Here in this patch also, we have return;\n> > > > statements after ereport(ERROR, for void returning functions. I'm\n> > > > not sure removing them would cause some compiler warnings on some\n> > > > platforms with some other compilers. If we're not sure, I'm okay to\n> > > > keep those return; statements. Thoughts?\n> > >\n> > > I felt we could retain the return statement and remove the comments.\n> > > If you are ok with that I will modify and post a patch for it.\n> > > Thoughts?\n> >\n> > I would like to keep it as is i.e. both return statement and /* keep compiler\n> > quiet */ comment. 
Having said that, it's better to leave it to the committer on\n> > whether to have the return statement at all.\n>\n> Yes, it's better to leave it to the committer on whether to have the \"return;\".\n> But, I think at least removing \"return;\" which is at the *end* of the function will not cause any warning.\n> Such as:\n>\n> + return; /* keep compiler quiet */\n> }\n>\n> So, I'd vote for at least removing the comment \" keep the compiler quiet \".\n\nThat sounds fine to me, Attached v6 patch which has the changes for the same.\n\nRegards,\nVignesh", "msg_date": "Mon, 10 May 2021 18:57:47 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On 2021-May-10, vignesh C wrote:\n\n> That sounds fine to me, Attached v6 patch which has the changes for the same.\n\nWhat about defining a function (maybe a static inline function in\ndefrem.h) that is marked noreturn and receives the DefElem and\noptionally pstate, and throws the error? I think that would avoid the\npatch's need to have half a dozen copies of the \"duplicate_error:\" label\nand ereport stanza.\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\n\n", "msg_date": "Mon, 10 May 2021 17:17:14 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Tue, May 11, 2021 at 2:47 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-May-10, vignesh C wrote:\n>\n> > That sounds fine to me, Attached v6 patch which has the changes for the same.\n>\n> What about defining a function (maybe a static inline function in\n> defrem.h) that is marked noreturn and receives the DefElem and\n> optionally pstate, and throws the error? 
I think that would avoid the\n> patch's need to have half a dozen copies of the \"duplicate_error:\" label\n> and ereport stanza.\n\n+1 to have a static inline function which just reports the error.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 11 May 2021 09:38:22 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Tue, May 11, 2021 at 2:47 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-May-10, vignesh C wrote:\n>\n> > That sounds fine to me, Attached v6 patch which has the changes for the same.\n>\n> What about defining a function (maybe a static inline function in\n> defrem.h) that is marked noreturn and receives the DefElem and\n> optionally pstate, and throws the error? I think that would avoid the\n> patch's need to have half a dozen copies of the \"duplicate_error:\" label\n> and ereport stanza.\n\nThanks for the comment, this reduces a significant amount of code.\nAttached patch has the changes incorporated.\n\nRegards,\nVignesh", "msg_date": "Tue, 11 May 2021 18:54:42 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Tue, May 11, 2021 at 6:54 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, May 11, 2021 at 2:47 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > On 2021-May-10, vignesh C wrote:\n> >\n> > > That sounds fine to me, Attached v6 patch which has the changes for the same.\n> >\n> > What about defining a function (maybe a static inline function in\n> > defrem.h) that is marked noreturn and receives the DefElem and\n> > optionally pstate, and throws the error? 
I think that would avoid the\n> > patch's need to have half a dozen copies of the \"duplicate_error:\" label\n> > and ereport stanza.\n>\n> Thanks for the comment, this reduces a significant amount of code.\n\nYeah, the patch reduces more than 200 LOC which is a pretty good thing.\n\n 25 files changed, 239 insertions(+), 454 deletions(-)\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 11 May 2021 21:36:02 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "You can avoid duplicating the ereport like this:\n\n+ ereport(ERROR,\n+ (errcode(ERRCODE_SYNTAX_ERROR),\n+ errmsg(\"option \\\"%s\\\" specified more than once\", defel->defname),\n+ parser ? parser_errposition(pstate, defel->location) : 0));\n\n... also, since e3a87b4991cc you can now elide the parens around the\nauxiliary function calls:\n\n+ ereport(ERROR,\n+ errcode(ERRCODE_SYNTAX_ERROR),\n+ errmsg(\"option \\\"%s\\\" specified more than once\", defel->defname),\n+ parser ? parser_errposition(pstate, defel->location) : 0));\n\nPlease do add a pg_attribute_noreturn() decorator. I'm not sure if any\ncompilers will complain about the code flow if you have that, but I\nexpect many (all?) will if you don't.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\"Java is clearly an example of money oriented programming\" (A. 
Stepanov)\n\n\n", "msg_date": "Wed, 12 May 2021 19:28:23 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Thu, May 13, 2021 at 4:58 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> You can avoid duplicating the ereport like this:\n>\n> + ereport(ERROR,\n> + (errcode(ERRCODE_SYNTAX_ERROR),\n> + errmsg(\"option \\\"%s\\\" specified more than once\", defel->defname),\n> + parser ? parser_errposition(pstate, defel->location) : 0));\n>\n> ... also, since e3a87b4991cc you can now elide the parens around the\n> auxiliary function calls:\n>\n\nModified.\n\n> + ereport(ERROR,\n> + errcode(ERRCODE_SYNTAX_ERROR),\n> + errmsg(\"option \\\"%s\\\" specified more than once\", defel->defname),\n> + parser ? parser_errposition(pstate, defel->location) : 0));\n>\n> Please do add a pg_attribute_noreturn() decorator. I'm not sure if any\n> compilers will complain about the code flow if you have that, but I\n> expect many (all?) 
will if you don't.\n\nModified.\n\nThanks for the comments, Attached patch has the changes for the same.\n\nRegards,\nVignesh", "msg_date": "Thu, 13 May 2021 20:09:13 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Thu, May 13, 2021 at 8:09 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Thu, May 13, 2021 at 4:58 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n>\n> Thanks for the comments, Attached patch has the changes for the same.\n>\n\nThe Patch was not applying on Head, the attached patch is rebased on\ntop of Head.\n\nRegards,\nVignesh", "msg_date": "Wed, 30 Jun 2021 19:48:51 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Wed, Jun 30, 2021 at 7:48 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Thu, May 13, 2021 at 8:09 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Thu, May 13, 2021 at 4:58 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > >\n> >\n> > Thanks for the comments, Attached patch has the changes for the same.\n> >\n>\n> The Patch was not applying on Head, the attached patch is rebased on\n> top of Head.\n\nThe patch was not applying on the head because of the recent commit\n\"8aafb02616753f5c6c90bbc567636b73c0cbb9d4\", attached patch which is\nrebased on HEAD.\n\nRegards,\nVignesh", "msg_date": "Tue, 6 Jul 2021 20:38:35 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "> On 6 Jul 2021, at 17:08, vignesh C <vignesh21@gmail.com> wrote:\n\n> The patch was not applying on the head because of the recent commit\n> \"8aafb02616753f5c6c90bbc567636b73c0cbb9d4\", attached patch which 
is\n> rebased on HEAD.\n\nI sort of like the visual cue of seeing ereport(ERROR .. since it makes it\nclear it will break execution then and there, this will require a lookup for\nanyone who don't know the function by heart. That being said, reducing\nduplicated boilerplate has clear value and this reduce the risk of introducing\nstrings which are complicated to translate. On the whole I think this is a net\nwin, and the patch looks pretty good.\n\n- DefElem *defel = (DefElem *) lfirst(option);\n+ defel = (DefElem *) lfirst(option);\nAny particular reason to include this in the patch?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 7 Jul 2021 22:22:09 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Thu, Jul 8, 2021 at 1:52 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 6 Jul 2021, at 17:08, vignesh C <vignesh21@gmail.com> wrote:\n>\n> > The patch was not applying on the head because of the recent commit\n> > \"8aafb02616753f5c6c90bbc567636b73c0cbb9d4\", attached patch which is\n> > rebased on HEAD.\n>\n> I sort of like the visual cue of seeing ereport(ERROR .. since it makes it\n> clear it will break execution then and there, this will require a lookup for\n> anyone who don't know the function by heart. That being said, reducing\n> duplicated boilerplate has clear value and this reduce the risk of introducing\n> strings which are complicated to translate. On the whole I think this is a net\n> win, and the patch looks pretty good.\n>\n> - DefElem *defel = (DefElem *) lfirst(option);\n> + defel = (DefElem *) lfirst(option);\n> Any particular reason to include this in the patch?\n>\n\nThanks for identifying this, this change is not needed, this was\nrequired in my previous solution based on goto label. As we have made\nthese changes into a common function. 
This change is not required,\nAttached v9 patch which removes these changes.\n\nRegards,\nVignesh", "msg_date": "Thu, 8 Jul 2021 19:10:36 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Thu, 8 Jul 2021 at 14:40, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Thu, Jul 8, 2021 at 1:52 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> >\n> > I sort of like the visual cue of seeing ereport(ERROR .. since it makes it\n> > clear it will break execution then and there, this will require a lookup for\n> > anyone who don't know the function by heart. That being said, reducing\n> > duplicated boilerplate has clear value and this reduce the risk of introducing\n> > strings which are complicated to translate. On the whole I think this is a net\n> > win, and the patch looks pretty good.\n> >\n\nBikeshedding the function name, there are several similar examples in\nthe existing code, but the closest analogs are probably\nerrorMissingColumn() and errorMissingRTE(). So I think\nerrorConflictingDefElem() would be better, since it's slightly more\nobviously an error.\n\nAlso, I don't think this function should be marked inline -- using a\nnormal function ought to help make the compiled code smaller.\n\nA bigger problem is that the patch replaces about 100 instances of the\nerror \"conflicting or redundant options\" with \"option \\\"%s\\\" specified\nmore than once\", but that's not always the appropriate thing to do.\nFor example, in the walsender code, the error isn't necessarily due to\nthe option being specified more than once.\n\nAlso, there are cases where def->defname isn't actually the name of\nthe option specified, so including it in the error is misleading. 
For\nexample:\n\nCREATE OR REPLACE FUNCTION foo() RETURNS int\nAS $$ SELECT 1 $$ STABLE IMMUTABLE;\n\nERROR: option \"volatility\" specified more than once\nLINE 2: AS $$ SELECT 1 $$ STABLE IMMUTABLE;\n ^\n\nand in this case \"volatility\" is an internal string, so it won't get translated.\n\nI'm inclined to think that it isn't worth the effort trying to\ndistinguish between conflicting options, options specified more than\nonce and faked-up options that weren't really specified. If we just\nmake errorConflictingDefElem() report \"conflicting or redundant\noptions\", then it's much easier to update calling code without making\nmistakes. The benefit then comes from the reduced code size and the\nfact that the patch includes pstate in more places, so the\nparser_errposition() indicator helps the user identify the problem.\n\nIn file_fdw_validator(), where there is no pstate, it's already using\n\"specified more than once\" as a hint to clarify the \"conflicting or\nredundant options\" error, so I think we should leave that alone.\n\nRegards,\nDean\n\n\n", "msg_date": "Sat, 10 Jul 2021 11:44:18 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Sat, 10 Jul 2021 at 11:44, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> I'm inclined to think that it isn't worth the effort trying to\n> distinguish between conflicting options, options specified more than\n> once and faked-up options that weren't really specified. If we just\n> make errorConflictingDefElem() report \"conflicting or redundant\n> options\", then it's much easier to update calling code without making\n> mistakes. 
The benefit then comes from the reduced code size and the\n> fact that the patch includes pstate in more places, so the\n> parser_errposition() indicator helps the user identify the problem.\n>\n> In file_fdw_validator(), where there is no pstate, it's already using\n> \"specified more than once\" as a hint to clarify the \"conflicting or\n> redundant options\" error, so I think we should leave that alone.\n>\n\nAnother possibility would be to pass the option list to\nerrorConflictingDefElem() and it could work out whether or not to\ninclude the \"option \\%s\\\" specified more than once\" hint, since that\nhint probably is useful, and working out whether to include it is\nprobably less error-prone if it's done there.\n\nRegards,\nDean\n\n\n", "msg_date": "Sat, 10 Jul 2021 12:00:49 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Sat, Jul 10, 2021 at 4:14 PM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Thu, 8 Jul 2021 at 14:40, vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Thu, Jul 8, 2021 at 1:52 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > >\n> > > I sort of like the visual cue of seeing ereport(ERROR .. since it makes it\n> > > clear it will break execution then and there, this will require a lookup for\n> > > anyone who don't know the function by heart. That being said, reducing\n> > > duplicated boilerplate has clear value and this reduce the risk of introducing\n> > > strings which are complicated to translate. On the whole I think this is a net\n> > > win, and the patch looks pretty good.\n> > >\n>\n> Bikeshedding the function name, there are several similar examples in\n> the existing code, but the closest analogs are probably\n> errorMissingColumn() and errorMissingRTE(). 
So I think\n> errorConflictingDefElem() would be better, since it's slightly more\n> obviously an error.\n>\n\nOk, I will change it to keep it similar.\n\n> Also, I don't think this function should be marked inline -- using a\n> normal function ought to help make the compiled code smaller.\n>\n\ninline instructs the compiler to attempt to embed the function content\ninto the calling code instead of executing an actual call. I think we\nshould keep it inline to reduce the function call.\n\n> A bigger problem is that the patch replaces about 100 instances of the\n> error \"conflicting or redundant options\" with \"option \\\"%s\\\" specified\n> more than once\", but that's not always the appropriate thing to do.\n> For example, in the walsender code, the error isn't necessarily due to\n> the option being specified more than once.\n>\n\nThis patch intended to change \"conflicting or redundant options\" to\n\"option \\\"%s\\\" specified more than once\" only in case that error is\nfor option specified more than once. This change is not required. I\nwill remove it.\n\n> Also, there are cases where def->defname isn't actually the name of\n> the option specified, so including it in the error is misleading. For\n> example:\n>\n> CREATE OR REPLACE FUNCTION foo() RETURNS int\n> AS $$ SELECT 1 $$ STABLE IMMUTABLE;\n>\n> ERROR: option \"volatility\" specified more than once\n> LINE 2: AS $$ SELECT 1 $$ STABLE IMMUTABLE;\n> ^\n>\n> and in this case \"volatility\" is an internal string, so it won't get translated.\n>\n> I'm inclined to think that it isn't worth the effort trying to\n> distinguish between conflicting options, options specified more than\n> once and faked-up options that weren't really specified. If we just\n> make errorConflictingDefElem() report \"conflicting or redundant\n> options\", then it's much easier to update calling code without making\n> mistakes. 
The benefit then comes from the reduced code size and the\n> fact that the patch includes pstate in more places, so the\n> parser_errposition() indicator helps the user identify the problem.\n>\n> In file_fdw_validator(), where there is no pstate, it's already using\n> \"specified more than once\" as a hint to clarify the \"conflicting or\n> redundant options\" error, so I think we should leave that alone.\n\nThis patch intended to change \"conflicting or redundant options\" to\n\"option \\\"%s\\\" specified more than once\" only in case that error is\nfor option specified more than once. Thanks for pointing out a few\nplaces where the actual error \"conflicting or redundant options\"\nshould be left as it is. I will post a new patch which will remove the\nconflicting options error scenarios, which were not targeted in this\npatch.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sat, 10 Jul 2021 21:32:50 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Sat, Jul 10, 2021 at 4:31 PM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Sat, 10 Jul 2021 at 11:44, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> >\n> > I'm inclined to think that it isn't worth the effort trying to\n> > distinguish between conflicting options, options specified more than\n> > once and faked-up options that weren't really specified. If we just\n> > make errorConflictingDefElem() report \"conflicting or redundant\n> > options\", then it's much easier to update calling code without making\n> > mistakes. 
The benefit then comes from the reduced code size and the\n> > fact that the patch includes pstate in more places, so the\n> > parser_errposition() indicator helps the user identify the problem.\n> >\n> > In file_fdw_validator(), where there is no pstate, it's already using\n> > \"specified more than once\" as a hint to clarify the \"conflicting or\n> > redundant options\" error, so I think we should leave that alone.\n> >\n>\n> Another possibility would be to pass the option list to\n> errorConflictingDefElem() and it could work out whether or not to\n> include the \"option \\%s\\\" specified more than once\" hint, since that\n> hint probably is useful, and working out whether to include it is\n> probably less error-prone if it's done there.\n\nI'm planning to handle conflicting errors separately after this\ncurrent work is done, once the patch is changed to have just the valid\nscenarios(removing the scenarios you have pointed out) existing\nfunction can work as is without any changes. Thoughts?\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sat, 10 Jul 2021 22:38:53 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Sat, 10 Jul 2021 at 17:03, vignesh C <vignesh21@gmail.com> wrote:\n>\n> > Also, I don't think this function should be marked inline -- using a\n> > normal function ought to help make the compiled code smaller.\n>\n> inline instructs the compiler to attempt to embed the function content\n> into the calling code instead of executing an actual call. 
I think we\n> should keep it inline to reduce the function call.\n\nHmm, I'd say that inline should generally be used sparingly, and only\nfor small functions that are called very often, to avoid the function\ncall overhead, and generate a faster and possibly smaller executable.\n(Though I think sometimes it can still be faster if the executable is\nlarger.)\n\nIn this case, it's a function that is only called under error\nconditions, so it's not commonly called, and we don't care so much\nabout performance when we're about to throw an error.\n\nAlso, if you look at an ereport() such as\n\n ereport(ERROR,\n errcode(ERRCODE_SYNTAX_ERROR),\n errmsg(\"conflicting or redundant options\"),\n parser_errposition(pstate, defel->location)));\n\nThis is a macro that's actually expanded into 5 separate function calls:\n\n - errstart() / errstart_cold()\n - errcode()\n - errmsg()\n - parser_errposition()\n - errfinish()\n\nso it's a non-trivial amount of code. Whereas, if it's not inlined, it\nbecomes just one function call at each call-site, making for smaller,\nfaster code in the typical case where an error is not being raised.\n\nOf course, it's possible the compiler might still decide to inline the\nfunction, if it thinks that's preferable. 
In some cases, we explicitly\nmark this type of function with pg_noinline, to avoid that, and reduce\ncode bloat where it's used in lots of small, fast functions (see, for\nexample, float_overflow_error()).\n\nIn general though, I think inline and noinline should be reserved for\nspecial cases where they give a clear, measurable benefit, and that in\ngeneral it's better to not mark the function and just let the compiler\ndecide.\n\nRegards,\nDean\n\n\n", "msg_date": "Sun, 11 Jul 2021 10:27:00 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Sat, 10 Jul 2021 at 18:09, vignesh C <vignesh21@gmail.com> wrote:\n>\n> I'm planning to handle conflicting errors separately after this\n> current work is done, once the patch is changed to have just the valid\n> scenarios(removing the scenarios you have pointed out) existing\n> function can work as is without any changes. Thoughts?\n\nAh OK, that might be reasonable. 
Perhaps, then errorDuplicateDefElem()\nand errorConflictingDefElem() would be better than what I originally\nsuggested.\n\nBTW, another case I spotted was this:\n\ncopy (select 1) to stdout csv csv header;\nERROR: option \"format\" specified more than once\nLINE 1: copy (select 1) to stdout csv csv header;\n ^\n\nwhich isn't good because there is no option called \"format\".\n\nRegards,\nDean\n\n\n", "msg_date": "Sun, 11 Jul 2021 10:52:56 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": ".On Sun, Jul 11, 2021 at 2:57 PM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Sat, 10 Jul 2021 at 17:03, vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > > Also, I don't think this function should be marked inline -- using a\n> > > normal function ought to help make the compiled code smaller.\n> >\n> > inline instructs the compiler to attempt to embed the function content\n> > into the calling code instead of executing an actual call. 
I think we\n> > should keep it inline to reduce the function call.\n>\n> Hmm, I'd say that inline should generally be used sparingly, and only\n> for small functions that are called very often, to avoid the function\n> call overhead, and generate a faster and possibly smaller executable.\n> (Though I think sometimes it can still be faster if the executable is\n> larger.)\n>\n> In this case, it's a function that is only called under error\n> conditions, so it's not commonly called, and we don't care so much\n> about performance when we're about to throw an error.\n>\n> Also, if you look at an ereport() such as\n>\n> ereport(ERROR,\n> errcode(ERRCODE_SYNTAX_ERROR),\n> errmsg(\"conflicting or redundant options\"),\n> parser_errposition(pstate, defel->location)));\n>\n> This is a macro that's actually expanded into 5 separate function calls:\n>\n> - errstart() / errstart_cold()\n> - errcode()\n> - errmsg()\n> - parser_errposition()\n> - errfinish()\n>\n> so it's a non-trivial amount of code. Whereas, if it's not inlined, it\n> becomes just one function call at each call-site, making for smaller,\n> faster code in the typical case where an error is not being raised.\n>\n> Of course, it's possible the compiler might still decide to inline the\n> function, if it thinks that's preferable. In some cases, we explicitly\n> mark this type of function with pg_noinline, to avoid that, and reduce\n> code bloat where it's used in lots of small, fast functions (see, for\n> example, float_overflow_error()).\n>\n> In general though, I think inline and noinline should be reserved for\n> special cases where they give a clear, measurable benefit, and that in\n> general it's better to not mark the function and just let the compiler\n> decide.\n\nOk, that makes sense. As this flow is mainly in the error part it is\nok. 
I will change it.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sun, 11 Jul 2021 18:59:23 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Sun, Jul 11, 2021 at 3:23 PM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Sat, 10 Jul 2021 at 18:09, vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > I'm planning to handle conflicting errors separately after this\n> > current work is done, once the patch is changed to have just the valid\n> > scenarios(removing the scenarios you have pointed out) existing\n> > function can work as is without any changes. Thoughts?\n>\n> Ah OK, that might be reasonable. Perhaps, then errorDuplicateDefElem()\n> and errorConflictingDefElem() would be better than what I originally\n> suggested.\n>\n> BTW, another case I spotted was this:\n>\n> copy (select 1) to stdout csv csv header;\n> ERROR: option \"format\" specified more than once\n> LINE 1: copy (select 1) to stdout csv csv header;\n> ^\n>\n\nThanks for your comments, I have made the changes for the same in the\nV10 patch attached.\nThoughts?\n\nRegards,\nVignesh", "msg_date": "Mon, 12 Jul 2021 22:09:22 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Mon, 12 Jul 2021 at 17:39, vignesh C <vignesh21@gmail.com> wrote:\n>\n> Thanks for your comments, I have made the changes for the same in the\n> V10 patch attached.\n> Thoughts?\n>\n\nI'm still not happy about changing so many error messages.\n\nSome of the changes might be OK, but aren't strictly necessary. 
For example:\n\n COPY x from stdin (force_not_null (a), force_not_null (b));\n-ERROR: conflicting or redundant options\n+ERROR: option \"force_not_null\" specified more than once\n LINE 1: COPY x from stdin (force_not_null (a), force_not_null (b));\n ^\n\nI actually prefer the original primary error message, for consistency\nwith other similar cases, and I think the error position indicator is\nsufficient to identify the problem. If it were to include the\n\"specified more than once\" text, I would put that in DETAIL.\n\nOther changes are wrong though. For example:\n\n COPY x from stdin (format CSV, FORMAT CSV);\n-ERROR: conflicting or redundant options\n+ERROR: redundant options specified\n LINE 1: COPY x from stdin (format CSV, FORMAT CSV);\n ^\n\nThe problem here is that the code that throws this error throws the\nsame error if the second format is different, which would make it a\nconflicting option, not a redundant one. And I don't think we should\nadd more code to test whether it's conflicting or redundant, so again,\nI think we should just keep the original error message.\n\nSimilarly, this error is wrong:\n\nCREATE OR REPLACE FUNCTION foo() RETURNS int AS $$ SELECT 1 $$ STABLE IMMUTABLE;\nERROR: redundant options specified\nLINE 1: ...NCTION foo() RETURNS int AS $$ SELECT 1 $$ STABLE IMMUTABLE;\n ^\n\nAnd even this error:\n\nCREATE OR REPLACE FUNCTION foo() RETURNS int AS $$ SELECT 1 $$ STRICT STRICT;\nERROR: redundant options specified\nLINE 1: ... 
FUNCTION foo() RETURNS int AS $$ SELECT 1 $$ STRICT STRICT;\n ^\n\nwhich looks OK, is actually problematic because the same code also\nhandles the alternate syntax, which leads to this (which is now wrong\nbecause it's conflicting not redundant):\n\nCREATE OR REPLACE FUNCTION foo() RETURNS int AS $$ SELECT 1 $$ STRICT\nCALLED ON NULL INPUT;\nERROR: redundant options specified\nLINE 1: ...NCTION foo() RETURNS int AS $$ SELECT 1 $$ STRICT CALLED ON ...\n ^\n\nThe problem is it's actually quite hard to decide in each case whether\nthe option is redundant or conflicting. Sometimes, it might look\nobvious in the code, but actually be much more subtle, due to an\nearlier transformation of the grammar. Likewise redundant doesn't\nnecessarily mean literally specified more than once.\n\nAlso, most of these don't have regression test cases, and I'm very\nreluctant to change them without proper testing, and that would make\nthe patch much bigger. To me, this patch is already attempting to\nchange too much in one go, which is causing problems.\n\nSo I suggest a more incremental approach, starting by keeping the\noriginal error message, but improving it where possible with the error\nposition. Then maybe move on to look at specific cases that can be\nfurther improved with additional detail (keeping the same primary\nerror message, for consistency).\n\nHere is an updated version, following that approach. It does the following:\n\n1). Keeps the same primary error message (\"conflicting or redundant\noptions\") in all cases.\n\n2). Uses errorConflictingDefElem() to throw it, to ensure consistency\nand reduce the executable size.\n\n3). 
Includes your enhancements to make the ParseState available in\nmore places, so that the error position indicator is shown to indicate\nthe cause of the error.\n\nIMO, this makes for a much safer incremental change, that is more committable.\n\nAs it turns out, there are 110 cases of this error that now use\nerrorConflictingDefElem(), and of those, just 10 (in 3 functions)\ndon't have a ParseState readily available to them:\n\n- ATExecSetIdentity()\n- parse_output_parameters() x5\n- parseCreateReplSlotOptions() x4\n\nIt might be possible to improve those (and possibly some of the others\ntoo) by adding some appropriate DETAIL to the error, but as I said, I\nsuggest doing that in a separate follow-on patch, and only with\ncareful analysis and testing of each case.\n\nAs it stands, the improvements from (3) seem quite worthwhile. Also,\nthe patch saves a couple of hundred lines of code, and for me an\noptimised executable is around 30 kB smaller, which is more than I\nexpected.\n\nThoughts?\n\nRegards,\nDean", "msg_date": "Tue, 13 Jul 2021 11:54:54 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Tue, Jul 13, 2021 at 4:25 PM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Mon, 12 Jul 2021 at 17:39, vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > Thanks for your comments, I have made the changes for the same in the\n> > V10 patch attached.\n> > Thoughts?\n> >\n>\n> I'm still not happy about changing so many error messages.\n>\n> Some of the changes might be OK, but aren't strictly necessary. 
For example:\n>\n> COPY x from stdin (force_not_null (a), force_not_null (b));\n> -ERROR: conflicting or redundant options\n> +ERROR: option \"force_not_null\" specified more than once\n> LINE 1: COPY x from stdin (force_not_null (a), force_not_null (b));\n> ^\n>\n> I actually prefer the original primary error message, for consistency\n> with other similar cases, and I think the error position indicator is\n> sufficient to identify the problem. If it were to include the\n> \"specified more than once\" text, I would put that in DETAIL.\n>\n> Other changes are wrong though. For example:\n>\n> COPY x from stdin (format CSV, FORMAT CSV);\n> -ERROR: conflicting or redundant options\n> +ERROR: redundant options specified\n> LINE 1: COPY x from stdin (format CSV, FORMAT CSV);\n> ^\n>\n> The problem here is that the code that throws this error throws the\n> same error if the second format is different, which would make it a\n> conflicting option, not a redundant one. And I don't think we should\n> add more code to test whether it's conflicting or redundant, so again,\n> I think we should just keep the original error message.\n>\n> Similarly, this error is wrong:\n>\n> CREATE OR REPLACE FUNCTION foo() RETURNS int AS $$ SELECT 1 $$ STABLE IMMUTABLE;\n> ERROR: redundant options specified\n> LINE 1: ...NCTION foo() RETURNS int AS $$ SELECT 1 $$ STABLE IMMUTABLE;\n> ^\n>\n> And even this error:\n>\n> CREATE OR REPLACE FUNCTION foo() RETURNS int AS $$ SELECT 1 $$ STRICT STRICT;\n> ERROR: redundant options specified\n> LINE 1: ... 
FUNCTION foo() RETURNS int AS $$ SELECT 1 $$ STRICT STRICT;\n> ^\n>\n> which looks OK, is actually problematic because the same code also\n> handles the alternate syntax, which leads to this (which is now wrong\n> because it's conflicting not redundant):\n>\n> CREATE OR REPLACE FUNCTION foo() RETURNS int AS $$ SELECT 1 $$ STRICT\n> CALLED ON NULL INPUT;\n> ERROR: redundant options specified\n> LINE 1: ...NCTION foo() RETURNS int AS $$ SELECT 1 $$ STRICT CALLED ON ...\n> ^\n>\n> The problem is it's actually quite hard to decide in each case whether\n> the option is redundant or conflicting. Sometimes, it might look\n> obvious in the code, but actually be much more subtle, due to an\n> earlier transformation of the grammar. Likewise redundant doesn't\n> necessarily mean literally specified more than once.\n>\n> Also, most of these don't have regression test cases, and I'm very\n> reluctant to change them without proper testing, and that would make\n> the patch much bigger. To me, this patch is already attempting to\n> change too much in one go, which is causing problems.\n>\n> So I suggest a more incremental approach, starting by keeping the\n> original error message, but improving it where possible with the error\n> position. Then maybe move on to look at specific cases that can be\n> further improved with additional detail (keeping the same primary\n> error message, for consistency).\n\nI'm fine with this approach as we do not have tests to cover all the\nerror conditions, also I'm not sure if it is worth adding tests for\nall the error conditions and as the patch changes a large number of\nerror conditions, an incremental approach is better.\n\n> Here is an updated version, following that approach. It does the following:\n>\n> 1). Keeps the same primary error message (\"conflicting or redundant\n> options\") in all cases.\n>\n> 2). Uses errorConflictingDefElem() to throw it, to ensure consistency\n> and reduce the executable size.\n>\n> 3). 
Includes your enhancements to make the ParseState available in\n> more places, so that the error position indicator is shown to indicate\n> the cause of the error.\n>\n> IMO, this makes for a much safer incremental change, that is more committable.\n>\n> As it turns out, there are 110 cases of this error that now use\n> errorConflictingDefElem(), and of those, just 10 (in 3 functions)\n> don't have a ParseState readily available to them:\n>\n> - ATExecSetIdentity()\n> - parse_output_parameters() x5\n> - parseCreateReplSlotOptions() x4\n>\n> It might be possible to improve those (and possibly some of the others\n> too) by adding some appropriate DETAIL to the error, but as I said, I\n> suggest doing that in a separate follow-on patch, and only with\n> careful analysis and testing of each case.\n>\n> As it stands, the improvements from (3) seem quite worthwhile. Also,\n> the patch saves a couple of hundred lines of code, and for me an\n> optimised executable is around 30 kB smaller, which is more than I\n> expected.\n\nAgreed, it can be handled as part of the 2nd patch. The changes you\nmade apply neatly and the test passes.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 13 Jul 2021 20:00:19 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Tue, 13 Jul 2021 at 15:30, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, Jul 13, 2021 at 4:25 PM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> >\n> > As it stands, the improvements from (3) seem quite worthwhile. Also,\n> > the patch saves a couple of hundred lines of code, and for me an\n> > optimised executable is around 30 kB smaller, which is more than I\n> > expected.\n>\n> Agreed, it can be handled as part of the 2nd patch. 
The changes you\n> made apply neatly and the test passes.\n\nPushed.\n\nI noticed that it's actually safe to call parser_errposition() with a\nnull ParseState, so I simplified the ereport() code to just call it\nunconditionally. Also, I decided to not bother using the new function\nin cases with a null ParseState anyway since it doesn't provide any\nmeaningful benefit in those cases, and those are the cases most likely\nto targeted next, so it didn't seem sensible to change that code, only\nfor it to be changed again later.\n\nProbably the thing to think about next is the few remaining cases that\nthrow this error directly and don't have any errdetail or errhint to\nhelp the user identify the offending option. My preference remains to\nleave the primary error text unchanged, but just add some suitable\nerrdetail. Also, it's probably not worth adding a new function for\nthose remaining errors, since there are only a handful of them.\n\nRegards,\nDean\n\n\n", "msg_date": "Thu, 15 Jul 2021 09:10:19 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Thu, Jul 15, 2021 at 1:40 PM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Tue, 13 Jul 2021 at 15:30, vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Tue, Jul 13, 2021 at 4:25 PM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> > >\n> > > As it stands, the improvements from (3) seem quite worthwhile. Also,\n> > > the patch saves a couple of hundred lines of code, and for me an\n> > > optimised executable is around 30 kB smaller, which is more than I\n> > > expected.\n> >\n> > Agreed, it can be handled as part of the 2nd patch. 
The changes you\n> > made apply neatly and the test passes.\n>\n> Pushed.\n>\n> I noticed that it's actually safe to call parser_errposition() with a\n> null ParseState, so I simplified the ereport() code to just call it\n> unconditionally. Also, I decided to not bother using the new function\n> in cases with a null ParseState anyway since it doesn't provide any\n> meaningful benefit in those cases, and those are the cases most likely\n> to targeted next, so it didn't seem sensible to change that code, only\n> for it to be changed again later.\n>\n> Probably the thing to think about next is the few remaining cases that\n> throw this error directly and don't have any errdetail or errhint to\n> help the user identify the offending option. My preference remains to\n> leave the primary error text unchanged, but just add some suitable\n> errdetail. Also, it's probably not worth adding a new function for\n> those remaining errors, since there are only a handful of them.\n\nThanks for pushing this patch. I have changed the commitfest status to\n\"waiting for author\" till 0002 patch is posted.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 15 Jul 2021 17:12:39 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" }, { "msg_contents": "On Thu, Jul 15, 2021 at 5:12 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Thu, Jul 15, 2021 at 1:40 PM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> >\n> > On Tue, 13 Jul 2021 at 15:30, vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > On Tue, Jul 13, 2021 at 4:25 PM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> > > >\n> > > > As it stands, the improvements from (3) seem quite worthwhile. 
Also,\n> > > > the patch saves a couple of hundred lines of code, and for me an\n> > > > optimised executable is around 30 kB smaller, which is more than I\n> > > > expected.\n> > >\n> > > Agreed, it can be handled as part of the 2nd patch. The changes you\n> > > made apply neatly and the test passes.\n> >\n> > Pushed.\n> >\n> > I noticed that it's actually safe to call parser_errposition() with a\n> > null ParseState, so I simplified the ereport() code to just call it\n> > unconditionally. Also, I decided to not bother using the new function\n> > in cases with a null ParseState anyway since it doesn't provide any\n> > meaningful benefit in those cases, and those are the cases most likely\n> > to targeted next, so it didn't seem sensible to change that code, only\n> > for it to be changed again later.\n> >\n> > Probably the thing to think about next is the few remaining cases that\n> > throw this error directly and don't have any errdetail or errhint to\n> > help the user identify the offending option. My preference remains to\n> > leave the primary error text unchanged, but just add some suitable\n> > errdetail. Also, it's probably not worth adding a new function for\n> > those remaining errors, since there are only a handful of them.\n>\n> Thanks for pushing this patch. I have changed the commitfest status to\n> \"waiting for author\" till 0002 patch is posted.\n\nI'm marking this entry in commitfest as committed, I'm planning to\nwork on the other comments later once I finish my current project\nworks.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 22 Jul 2021 11:39:23 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Enhanced error message to include hint messages for redundant\n options error" } ]
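[Editor's note] The consolidated error the thread above discusses is user-visible whenever the same option is repeated in an option list. A minimal sketch of triggering it, using a hypothetical table name; the exact wording and the presence of an error cursor depend on the server version and on the pushed patch:

```sql
-- Repeating an option in a COPY option list draws the
-- "conflicting or redundant options" error class this thread
-- consolidates; with a ParseState available the server can also
-- point at the offending option's position.
COPY mytab FROM STDIN (FORMAT csv, FORMAT text);
-- ERROR:  conflicting or redundant options
```

The errdetail work discussed at the end of the thread is aimed at exactly this situation: keeping the primary message stable while telling the user which option was redundant.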
[ { "msg_contents": "-hackers,\n\nSo in doing some recent work on pg_stat_statements, I notice that while the\nregression test still passes on HEAD, it appears that 4f0b096 (per git\nbisect) changed/broke how this works compared to historical versions.\n\nEssentially, when doing a fresh install of pg_stat_statements on a new\nfresh db (outside of the regression framework), it's not returning any rows\nfrom the view. I didn't see any related documentation changes, so as far\nas I know, this should still be recording all statements as per normal.\n\nMy full steps to reproduce from a clean Centos 7 install are attached. I\nhave also been able to reproduce this on OS X and Fedora 33. The TL;DR is:\n\nCREATE EXTENSION pg_stat_statements;\nCREATE TABLE foo (a int, b text);\nINSERT INTO foo VALUES (1,'a');\nSELECT * FROM foo;\nSELECT * FROM pg_stat_statements; -- returns nothing\n\nSettings for pg_stat_statements:\npostgres=# select name, setting from pg_settings where name like\n'pg_stat_statements%';\n name | setting\n-----------------------------------+---------\n pg_stat_statements.max | 5000\n pg_stat_statements.save | on\n pg_stat_statements.track | top\n pg_stat_statements.track_planning | off\n pg_stat_statements.track_utility | on\n(5 rows)\n\nIs this an expected change, or is this in fact broken? In previous\nrevisions, this was showing the INSERT and SELECT at the very least. I'm\nunclear as to why the regression test is still passing, so want to verify\nthat I'm not doing something wrong in the testing.\n\nBest,\n\nDavid", "msg_date": "Mon, 26 Apr 2021 10:14:59 -0500", "msg_from": "David Christensen <david.christensen@crunchydata.com>", "msg_from_op": true, "msg_subject": "Issue in recent pg_stat_statements?" 
}, { "msg_contents": "On Mon, Apr 26, 2021 at 5:15 PM David Christensen\n<david.christensen@crunchydata.com> wrote:\n>\n> -hackers,\n>\n> So in doing some recent work on pg_stat_statements, I notice that while the regression test still passes on HEAD, it appears that 4f0b096 (per git bisect) changed/broke how this works compared to historical versions.\n>\n> Essentially, when doing a fresh install of pg_stat_statements on a new fresh db (outside of the regression framework), it's not returning any rows from the view. I didn't see any related documentation changes, so as far as I know, this should still be recording all statements as per normal.\n>\n> My full steps to reproduce from a clean Centos 7 install are attached. I have also been able to reproduce this on OS X and Fedora 33. The TL;DR is:\n>\n> CREATE EXTENSION pg_stat_statements;\n> CREATE TABLE foo (a int, b text);\n> INSERT INTO foo VALUES (1,'a');\n> SELECT * FROM foo;\n> SELECT * FROM pg_stat_statements; -- returns nothing\n>\n> Settings for pg_stat_statements:\n> postgres=# select name, setting from pg_settings where name like 'pg_stat_statements%';\n> name | setting\n> -----------------------------------+---------\n> pg_stat_statements.max | 5000\n> pg_stat_statements.save | on\n> pg_stat_statements.track | top\n> pg_stat_statements.track_planning | off\n> pg_stat_statements.track_utility | on\n> (5 rows)\n>\n> Is this an expected change, or is this in fact broken? In previous revisions, this was showing the INSERT and SELECT at the very least. I'm unclear as to why the regression test is still passing, so want to verify that I'm not doing something wrong in the testing.\n\nYes, you want to look into the queryid functionality. See\nhttps://www.postgresql.org/message-id/flat/35457b09-36f8-add3-1d07-6034fa585ca8%40oss.nttdata.com\n\nInterface changes may still be coming in 14 for that. 
Or warnings.\n\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Mon, 26 Apr 2021 17:18:09 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Issue in recent pg_stat_statements?" }, { "msg_contents": ">\n> > Is this an expected change, or is this in fact broken? In previous\n> revisions, this was showing the INSERT and SELECT at the very least. I'm\n> unclear as to why the regression test is still passing, so want to verify\n> that I'm not doing something wrong in the testing.\n>\n> Yes, you want to look into the queryid functionality. See\n>\n> https://www.postgresql.org/message-id/flat/35457b09-36f8-add3-1d07-6034fa585ca8%40oss.nttdata.com\n>\n> Interface changes may still be coming in 14 for that. Or warnings.\n>\n\nHmm, I'm unclear as to why you would potentially want to use\npg_stat_statements *without* this functionality. At the very least, it\nviolates POLA — I spent the better part of a day thinking this was a bug\ndue to the expected behavior being so obvious I wouldn't have expected any\ndifferent.\n\nIn any case, this discussion is better had on a different thread. Thanks\nat least for explaining what I was seeing.\n\nBest,\n\nDavid\n", "msg_date": "Mon, 26 Apr 2021 10:40:21 -0500", "msg_from": "David Christensen <david.christensen@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: Issue in recent pg_stat_statements?" }, { "msg_contents": "On Mon, Apr 26, 2021 at 11:40 PM David Christensen\n<david.christensen@crunchydata.com> wrote:\n>>\n>> > Is this an expected change, or is this in fact broken? In previous revisions, this was showing the INSERT and SELECT at the very least. I'm unclear as to why the regression test is still passing, so want to verify that I'm not doing something wrong in the testing.\n>>\n>> Yes, you want to look into the queryid functionality. See\n>> https://www.postgresql.org/message-id/flat/35457b09-36f8-add3-1d07-6034fa585ca8%40oss.nttdata.com\n>>\n>> Interface changes may still be coming in 14 for that. Or warnings.\n>\n>\n> Hmm, I'm unclear as to why you would potentially want to use pg_stat_statements *without* this functionality.\n\nUsing pg_stat_statements with a different query_id semantics without\nhaving to fork pg_stat_statements.\n\n\n", "msg_date": "Tue, 27 Apr 2021 01:19:25 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Issue in recent pg_stat_statements?" }, { "msg_contents": "On Mon, Apr 26, 2021 at 12:18 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Mon, Apr 26, 2021 at 11:40 PM David Christensen\n> <david.christensen@crunchydata.com> wrote:\n> >>\n> >> > Is this an expected change, or is this in fact broken? In previous\n> revisions, this was showing the INSERT and SELECT at the very least. 
I'm\n> unclear as to why the regression test is still passing, so want to verify\n> that I'm not doing something wrong in the testing.\n> >>\n> >> Yes, you want to look into the queryid functionality. See\n> >>\n> https://www.postgresql.org/message-id/flat/35457b09-36f8-add3-1d07-6034fa585ca8%40oss.nttdata.com\n> >>\n> >> Interface changes may still be coming in 14 for that. Or warnings.\n> >\n> >\n> > Hmm, I'm unclear as to why you would potentially want to use\n> pg_stat_statements *without* this functionality.\n>\n> Using pg_stat_statements with a different query_id semantics without\n> having to fork pg_stat_statements.\n>\n\nI can see that argument for allowing alternatives, but the current default\nof nothing seems to be particularly non-useful, so some sensible default\nvalue would seem to be in order, or I can predict a whole mess of future\nuser complaints.", "msg_date": "Mon, 26 Apr 2021 12:53:30 -0500", "msg_from": "David Christensen <david.christensen@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: Issue in recent pg_stat_statements?" }, { "msg_contents": "On 2021-04-26 12:53:30 -0500, David Christensen wrote:\n> On Mon, Apr 26, 2021 at 12:18 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > Using pg_stat_statements with a different query_id semantics without\n> > having to fork pg_stat_statements.\n> >\n> \n> I can see that argument for allowing alternatives, but the current default\n> of nothing seems to be particularly non-useful, so some sensible default\n> value would seem to be in order, or I can predict a whole mess of future\n> user complaints.\n\n+1\n\n\n", "msg_date": "Mon, 26 Apr 2021 11:08:31 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Issue in recent pg_stat_statements?"
}, { "msg_contents": "Andres Freund writes:\n\n> On 2021-04-26 12:53:30 -0500, David Christensen wrote:\n>> On Mon, Apr 26, 2021 at 12:18 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>> > Using pg_stat_statements with a different query_id semantics without\n>> > having to fork pg_stat_statements.\n>> >\n>> \n>> I can see that argument for allowing alternatives, but the current default\n>> of nothing seems to be particularly non-useful, so some sensible default\n>> value would seem to be in order, or I can predict a whole mess of future\n>> user complaints.\n>\n> +1\n\nJust doing some routine followup here; it looks like cafde58b33 fixes\nthis issue.\n\nThanks!\n\nDavid\n\n\n", "msg_date": "Tue, 29 Jun 2021 12:25:57 -0500", "msg_from": "David Christensen <david.christensen@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: Issue in recent pg_stat_statements?" } ]
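[Editor's note] For readers hitting the empty-view behavior described in the thread above: in released PostgreSQL 14 the relevant knob is the `compute_query_id` GUC, the in-core query-ID computation that commit 4f0b096 introduced. The sketch below is not from the thread; it assumes `pg_stat_statements` is in `shared_preload_libraries` and the extension is created as in David's reproduction, and that the server includes the later fix he references (which made `auto` the default). On builds where the default is `off`, turning the GUC on explicitly populates the view:

```sql
-- Hedged sketch of a PostgreSQL 14 session; requires a live server.
SHOW compute_query_id;            -- 'off' reproduces the empty view;
                                  -- 'auto' lets pg_stat_statements enable it
SET compute_query_id = on;        -- explicit opt-in also works
SELECT * FROM foo;
SELECT query, calls FROM pg_stat_statements;  -- rows now appear
```

This also explains why the regression test kept passing: the extension's own test setup enables query-ID computation, while a bare fresh install on an affected build did not.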
[ { "msg_contents": "Hi\n\nI tried to write a query that does lateral join between\ninformation_schema.tables and pgstattuple function.\n\nselect * from information_schema.tables, lateral(select * from\npgstattuple(table_name::name)) s where table_type = 'BASE TABLE';\n\nThe query finished by strange error\n\npostgres=# select * from information_schema.tables, lateral(select * from\npgstattuple(table_name::name)) s where table_type = 'BASE TABLE';\nERROR: relation \"sql_features\" does not exist\n\nWhen I set search_path to information_schema, then the query is running.\nBut there is not any reason why it should be necessary.\n\nI found this issue on pg 11.11, but the same behavior is on master branch.\n\nRegards\n\nPavel\n", "msg_date": "Mon, 26 Apr 2021 18:57:17 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "ERROR: relation \"sql_features\" does not exist" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> I tried to write a query that does lateral join between\n> information_schema.tables and pgstattuple function.\n\n> select * from information_schema.tables, lateral(select * from\n> pgstattuple(table_name::name)) s where table_type = 'BASE TABLE';\n\n> The query finished by strange error\n\n> postgres=# select * from information_schema.tables, lateral(select * from\n> pgstattuple(table_name::name)) s where table_type = 'BASE TABLE';\n> ERROR: relation \"sql_features\" does not exist\n\n> When I set search_path to information_schema, then the query is running.\n> But there is not any reason why it should be necessary.\n\nNope, this is classic user error, nothing else. \"table_name::name\"\nis entirely inadequate as a way to reference a table that isn't\nvisible in your search path. You have to incorporate the schema\nname as well.\n\nIdeally you'd just pass the table OID to the OID-accepting version of\npgstattuple(), but of course the information_schema schema views\ndon't expose OIDs. So basically you need something like\n\npgstattuple((quote_ident(table_schema)||'.'||quote_ident(table_name))::regclass)\n\nalthough perhaps format() could help a little here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 26 Apr 2021 13:10:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ERROR: relation \"sql_features\" does not exist" }, { "msg_contents": "po 26. 4. 
2021 v 19:10 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > I tried to write a query that does lateral join between\n> > information_schema.tables and pgstattuple function.\n>\n> > select * from information_schema.tables, lateral(select * from\n> > pgstattuple(table_name::name)) s where table_type = 'BASE TABLE';\n>\n> > The query finished by strange error\n>\n> > postgres=# select * from information_schema.tables, lateral(select * from\n> > pgstattuple(table_name::name)) s where table_type = 'BASE TABLE';\n> > ERROR: relation \"sql_features\" does not exist\n>\n> > When I set search_path to information_schema, then the query is running.\n> > But there is not any reason why it should be necessary.\n>\n> Nope, this is classic user error, nothing else. \"table_name::name\"\n> is entirely inadequate as a way to reference a table that isn't\n> visible in your search path. You have to incorporate the schema\n> name as well.\n>\n> Ideally you'd just pass the table OID to the OID-accepting version of\n> pgstattuple(), but of course the information_schema schema views\n> don't expose OIDs. So basically you need something like\n>\n>\n> pgstattuple((quote_ident(table_schema)||'.'||quote_ident(table_name))::regclass)\n>\n> although perhaps format() could help a little here.\n>\n\nI understand now. Thank you for explanation\n\nselect * from information_schema.tables, lateral(select * from\npgstattuple(format('%I.%I', table_schema, table_name))) s where table_type\n= 'BASE TABLE';\n\nThis is working\n\nRegards\n\nPavel\n\n>\n> regards, tom lane\n>\n", "msg_date": "Mon, 26 Apr 2021 19:14:50 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ERROR: relation \"sql_features\" does not exist" } ]
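[Editor's note] Tom's point about the OID-accepting variant can be taken one step further: querying pg_class directly sidesteps both the quoting and the search_path problems, because no name lookup happens at all. This sketch is not from the thread; it assumes the pgstattuple extension is installed, and the `relnamespace` filter is purely illustrative:

```sql
-- Pass OIDs straight from pg_class; relkind = 'r' keeps ordinary tables.
SELECT c.oid::regclass AS rel, s.*
FROM pg_class c,
     LATERAL pgstattuple(c.oid) s
WHERE c.relkind = 'r'
  AND c.relnamespace = 'public'::regnamespace;  -- illustrative filter
```

The implicit oid-to-regclass coercion lets `pgstattuple(c.oid)` resolve to the regclass-accepting function, so no string assembly with quote_ident() or format() is needed at all.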
[ { "msg_contents": "\nJust to complete the circle on this topic, which I intend to take up\nagain during the next dev cycle, I have captured the current state of my\nwork in a public git repo at\n<https://gitlab.com/adunstan/postgresnodeng>. This can be cloned and\nused without having to change the core Postgres code, as shown in the\nREADME.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 26 Apr 2021 15:31:09 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "multi-version capable PostgresNode.pm" } ]
[ { "msg_contents": "Hi,\n\nA lot of the APIs in PostgreSQL that accept a callback follow the\nfamiliar idiom of an extra void* argument allowing a single static\ncallback address to be multiplexed. But not all of them do. For example,\nif I wanted to expose the possibility of defining GUCs from code written\nin my PL, I would run into the fact that none of GUC's check/assign/show\nhooks have the void* extra arg.\n\n(The GUC machinery allows a way for extra info to be passed from a check\nhook to an assign hook, which for a moment I thought might be abusable to\nmultiplex hooks, but I don't believe it is.)\n\nMaking all such APIs follow the void *extra convention might have\nthe virtue of consistency, but that might not be worth disturbing APIs\nthat have been stable for many years, and an effort to do so\nmight not be guaranteed to catch every such instance anyway.\n\nA more general solution might be a function that generates a callback\nstub: \"please give me a void (*foo)() at a distinct address that I can\npass into this API, and when called it will call bar(baz), passing\nthis value for baz.\"\n\nIn olden days I wasn't above writing C to just slam those instructions\non the stack and return their address, but that was without multiple\narchitectures to think about, and non-executable stacks, and so on.\nNow, there'd be a bit more magic required. 
Maybe some such ability is\nalready present in LLVM and could be exposed in jit.c?\n\nI see that Java is currently incubating such a feature [0], so if I wait\nfor that I will have another option that serves my specific purposes, but\nI wonder if it would be useful for PostgreSQL itself to have such\na capability available (or if it already does, and I haven't found it).\n\nRegards,\n-Chap\n\n\n\n[0]\nhttps://docs.oracle.com/en/java/javase/16/docs/api/jdk.incubator.foreign/jdk/incubator/foreign/CLinker.html#upcallStub(java.lang.invoke.MethodHandle,jdk.incubator.foreign.FunctionDescriptor)\n\n\n", "msg_date": "Mon, 26 Apr 2021 20:13:19 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Do we have a way to dynamically make a callback stub?" } ]
[ { "msg_contents": "Folks,\n\nI noticed that $subject completes with already valid constraints,\nplease find attached a patch that fixes it. I noticed that there are\nother places constraints can be validated, but didn't check whether\nsimilar bugs exist there yet.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate", "msg_date": "Tue, 27 Apr 2021 00:24:34 +0000", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": true, "msg_subject": "Bug fix for tab completion of ALTER TABLE ... VALIDATE CONSTRAINT\n ..." }, { "msg_contents": "Hi David,\n\n> I noticed that $subject completes with already valid constraints,\n> please find attached a patch that fixes it. I noticed that there are\n> other places constraints can be validated, but didn't check whether\n> similar bugs exist there yet.\n\nThere was a typo in the patch (\"... and and not convalidated\"). I've fixed\nit. Otherwise the patch passes all the tests and works as expected:\n\neax=# create table foo (x int);\nCREATE TABLE\neax=# alter table foo add constraint bar check (x < 3) not valid;\nALTER TABLE\neax=# alter table foo add constraint baz check (x <> 5) not valid;\nALTER TABLE\neax=# alter table foo validate constraint ba\nbar baz\neax=# alter table foo validate constraint bar;\nALTER TABLE\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Tue, 27 Apr 2021 12:33:25 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Bug fix for tab completion of ALTER TABLE ... VALIDATE CONSTRAINT\n ..." }, { "msg_contents": "Hi hackers,\n\n> Otherwise the patch passes all the tests and works as expected\n\nI've noticed there is no tab completion for ALTER TABLE xxx ADD. Here\nis an alternative version of the patch that fixes this as well. 
Not\nsure if this should be in the same commit though.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Tue, 27 Apr 2021 12:58:52 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Bug fix for tab completion of ALTER TABLE ... VALIDATE CONSTRAINT\n ..." }, { "msg_contents": "On Tue, Apr 27, 2021 at 12:33:25PM +0300, Aleksander Alekseev wrote:\n> Hi David,\n> \n> > I noticed that $subject completes with already valid constraints,\n> > please find attached a patch that fixes it. I noticed that there are\n> > other places constraints can be validated, but didn't check whether\n> > similar bugs exist there yet.\n> \n> There was a typo in the patch (\"... and and not convalidated\"). I've fixed\n> it. Otherwise the patch passes all the tests and works as expected:\n> \n> eax=# create table foo (x int);\n> CREATE TABLE\n> eax=# alter table foo add constraint bar check (x < 3) not valid;\n> ALTER TABLE\n> eax=# alter table foo add constraint baz check (x <> 5) not valid;\n> ALTER TABLE\n> eax=# alter table foo validate constraint ba\n> bar baz\n> eax=# alter table foo validate constraint bar;\n> ALTER TABLE\n\nSorry about that typo, and thanks for poking at this!\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Wed, 28 Apr 2021 02:53:08 +0000", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": true, "msg_subject": "Re: Bug fix for tab completion of ALTER TABLE ... VALIDATE\n CONSTRAINT ..." }, { "msg_contents": "On Tue, Apr 27, 2021 at 12:58:52PM +0300, Aleksander Alekseev wrote:\n> I've noticed there is no tab completion for ALTER TABLE xxx ADD. Here\n> is an alternative version of the patch that fixes this as well. 
Not\n> sure if this should be in the same commit though.\n\n- /* If we have ALTER TABLE <sth> DROP, provide COLUMN or CONSTRAINT */\n- else if (Matches(\"ALTER\", \"TABLE\", MatchAny, \"DROP\"))\n+ /* If we have ALTER TABLE <sth> ADD|DROP, provide COLUMN or CONSTRAINT */\n+ else if (Matches(\"ALTER\", \"TABLE\", MatchAny, \"ADD|DROP\"))\nSeems to me that the behavior to not complete with COLUMN or\nCONSTRAINT for ADD is intentional, as it is possible to specify a\nconstraint or column name without the object type first. This\nintroduces a inconsistent behavior with what we do for columns with\nADD, for one. So a more consistent approach would be to list columns,\nconstraints, COLUMN and CONSTRAINT in the list of options available\nafter ADD.\n\n+ else if (Matches(\"ALTER\", \"TABLE\", MatchAny, \"VALIDATE\", \"CONSTRAINT\"))\n+ {\n+ completion_info_charp = prev3_wd;\n+ COMPLETE_WITH_QUERY(Query_for_nonvalid_constraint_of_table);\n+ }\nSpecifying valid constraints is an authorized grammar, so it does not\nseem that bad to keep things as they are, either. I would leave that\nalone.\n--\nMichael", "msg_date": "Wed, 19 May 2021 16:53:18 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Bug fix for tab completion of ALTER TABLE ... VALIDATE\n CONSTRAINT ..." }, { "msg_contents": "> On 19 May 2021, at 09:53, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Tue, Apr 27, 2021 at 12:58:52PM +0300, Aleksander Alekseev wrote:\n>> I've noticed there is no tab completion for ALTER TABLE xxx ADD. Here\n>> is an alternative version of the patch that fixes this as well. 
Not\n>> sure if this should be in the same commit though.\n> \n> - /* If we have ALTER TABLE <sth> DROP, provide COLUMN or CONSTRAINT */\n> - else if (Matches(\"ALTER\", \"TABLE\", MatchAny, \"DROP\"))\n> + /* If we have ALTER TABLE <sth> ADD|DROP, provide COLUMN or CONSTRAINT */\n> + else if (Matches(\"ALTER\", \"TABLE\", MatchAny, \"ADD|DROP\"))\n> Seems to me that the behavior to not complete with COLUMN or\n> CONSTRAINT for ADD is intentional, as it is possible to specify a\n> constraint or column name without the object type first. This\n> introduces a inconsistent behavior with what we do for columns with\n> ADD, for one. So a more consistent approach would be to list columns,\n> constraints, COLUMN and CONSTRAINT in the list of options available\n> after ADD.\n> \n> + else if (Matches(\"ALTER\", \"TABLE\", MatchAny, \"VALIDATE\", \"CONSTRAINT\"))\n> + {\n> + completion_info_charp = prev3_wd;\n> + COMPLETE_WITH_QUERY(Query_for_nonvalid_constraint_of_table);\n> + }\n> Specifying valid constraints is an authorized grammar, so it does not\n> seem that bad to keep things as they are, either. I would leave that\n> alone.\n\nThis has stalled being marked Waiting on Author since May, and reading the\nabove it sounds like marking it Returned with Feedback is the logical next step\n(patch also no longer applies).\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Fri, 3 Sep 2021 20:27:55 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Bug fix for tab completion of ALTER TABLE ... VALIDATE CONSTRAINT\n ..." }, { "msg_contents": "On Fri, Sep 03, 2021 at 08:27:55PM +0200, Daniel Gustafsson wrote:\n> > On 19 May 2021, at 09:53, Michael Paquier <michael@paquier.xyz> wrote:\n> > \n> > On Tue, Apr 27, 2021 at 12:58:52PM +0300, Aleksander Alekseev wrote:\n> >> I've noticed there is no tab completion for ALTER TABLE xxx ADD. Here\n> >> is an alternative version of the patch that fixes this as well. 
Not\n> >> sure if this should be in the same commit though.\n> > \n> > - /* If we have ALTER TABLE <sth> DROP, provide COLUMN or CONSTRAINT */\n> > - else if (Matches(\"ALTER\", \"TABLE\", MatchAny, \"DROP\"))\n> > + /* If we have ALTER TABLE <sth> ADD|DROP, provide COLUMN or CONSTRAINT */\n> > + else if (Matches(\"ALTER\", \"TABLE\", MatchAny, \"ADD|DROP\"))\n> > Seems to me that the behavior to not complete with COLUMN or\n> > CONSTRAINT for ADD is intentional, as it is possible to specify a\n> > constraint or column name without the object type first. This\n> > introduces a inconsistent behavior with what we do for columns with\n> > ADD, for one. So a more consistent approach would be to list columns,\n> > constraints, COLUMN and CONSTRAINT in the list of options available\n> > after ADD.\n> > \n> > + else if (Matches(\"ALTER\", \"TABLE\", MatchAny, \"VALIDATE\", \"CONSTRAINT\"))\n> > + {\n> > + completion_info_charp = prev3_wd;\n> > + COMPLETE_WITH_QUERY(Query_for_nonvalid_constraint_of_table);\n> > + }\n> > Specifying valid constraints is an authorized grammar, so it does not\n> > seem that bad to keep things as they are, either. I would leave that\n> > alone.\n> \n> This has stalled being marked Waiting on Author since May, and reading the\n> above it sounds like marking it Returned with Feedback is the logical next step\n> (patch also no longer applies).\n\nPlease find attached the next revision :)\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate", "msg_date": "Wed, 15 Sep 2021 06:06:04 +0000", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": true, "msg_subject": "Re: Bug fix for tab completion of ALTER TABLE ... VALIDATE\n CONSTRAINT ..." }, { "msg_contents": "Hi David,\n\n> Please find attached the next revision :)\n\nThe patch didn't apply and couldn't pass cfbot [1]. 
The (hopefully)\ncorrected patch is attached. Other than that it looks OK to me but let's\nsee what cfbot will tell.\n\n[1]: http://cfbot.cputube.org/patch_34_3113.log\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Fri, 24 Sep 2021 15:35:43 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Bug fix for tab completion of ALTER TABLE ... VALIDATE CONSTRAINT\n ..." }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nThe cfbot seems to be happy with the updated patch.\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Fri, 24 Sep 2021 13:51:25 +0000", "msg_from": "Aleksander Alekseev <afiskon@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bug fix for tab completion of ALTER TABLE ... VALIDATE CONSTRAINT\n ..." }, { "msg_contents": "Aleksander Alekseev <afiskon@gmail.com> writes:\n> The cfbot seems to be happy with the updated patch.\n> The new status of this patch is: Ready for Committer\n\nPushed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 03 Jan 2022 18:15:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bug fix for tab completion of ALTER TABLE ... VALIDATE CONSTRAINT\n ..." } ]
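[Editor's note] For context, the candidates the committed completion change offers after VALIDATE CONSTRAINT correspond roughly to this catalog lookup. This is a sketch against a live server using the table from Aleksander's session, not the exact query in tab-complete.c:

```sql
-- NOT VALID constraints on table foo, i.e. the ones worth completing.
SELECT conname
FROM pg_catalog.pg_constraint
WHERE conrelid = 'foo'::regclass
  AND NOT convalidated;
```

Constraints created with NOT VALID have `convalidated = false` until ALTER TABLE ... VALIDATE CONSTRAINT succeeds, which is why filtering on that column excludes already-valid constraints from the completion list.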
[ { "msg_contents": "Fork is an expensive operation[1]. The major cost is the mm(VMA PTE...) copy.\n\nARM is especially weak on fork, which will invalid TLB entries one by one, and this is an expensive operation[2]. We could easily got 100% CPU on ARM machine. We also meet fork problem in x86, but not as serious as arm.\n\nWe can avoid this by enable hugepage(and 2MB doesn’t help us under arm, we got a huge shared buffer), but we still think it is a problem.\n\nSo I propose to remove shared buffers from postmaster and shmat them after fork. Not all of them, we still keep necessary shared memories in postmaster. Or maybe we just need to give up fork like under Windows?\n\nAny good idea about it?\n\n[1]. https://www.microsoft.com/en-us/research/publication/a-fork-in-the-road/\n[2]. https://developer.arm.com/documentation/ddi0487/latest/\nD5.10 TLB maintenance requirements and the TLB maintenance instructions:\nBreak-before-make sequence on changing from an old translation table entry to a new translation table entryrequires the following steps:\n1. Replace the old translation table entry with an invalid entry, and execute a DSB instruction.\n2. Invalidate the translation table entry with a broadcast TLB invalidation instruction, and execute a DSBinstruction to ensure the completion of that invalidation.\n3. Write the new translation table entry, and execute a DSB instruction to ensure that the new entry is visible.\n\nRegards.\nYuhang Qiu.\n", "msg_date": "Tue, 27 Apr 2021 11:56:06 +0800", "msg_from": "\"=?UTF-8?B?6YKx5a6H6IiqKOeDm+i/nCk=?=\" <yuhang.qyh@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "=?UTF-8?B?QXR0YWNoIHRvIHNoYXJlZCBtZW1vcnkgYWZ0ZXIgZm9yaygp?=" }, { "msg_contents": "\nOn 4/26/21 11:56 PM, 邱宇航(烛远) wrote:\n> Fork is an expensive operation[1]. The major cost is the mm(VMA\n> PTE...) copy.\n>\n> ARM is especially weak on fork, which will invalid TLB entries one by\n> one, and this is an expensive operation[2]. We could easily got 100%\n> CPU on ARM machine. We also meet fork problem in x86, but not as\n> serious as arm.\n>\n> We can avoid this by enable hugepage(and 2MB doesn’t help us under\n> arm, we got a huge shared buffer), but we still think it is a problem.\n>\n> So I propose to remove shared buffers from postmaster and shmat them\n> after fork. Not all of them, we still keep necessary shared memories\n> in postmaster. Or maybe we just need to give up fork like under Windows?\n>\n\nWindows has CreateProcess, which isn't available elsewhere.
If you build\nwith EXEC_BACKEND on *nix it will fork() followed by exec(), the classic\nUnix pattern. You can benchmark that but I doubt you will like the results.\n\nThis is one of the reasons for using a connection pooler like pgbouncer,\nwhich can vastly reduce the number of new process creations Postgres has\nto do.\n\nBetter shared memory management might be more promising.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 27 Apr 2021 07:23:16 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Attach to shared memory after fork()" }, { "msg_contents": "\"=?UTF-8?B?6YKx5a6H6IiqKOeDm+i/nCk=?=\" <yuhang.qyh@alibaba-inc.com> writes:\n> Fork is an expensive operation[1].\n\nYeah, it's not hugely cheap.\n\n> So I propose to remove shared buffers from postmaster and shmat them\n> after fork.\n\nThis proposal seems moderately insane. In the first place, it\nintroduces failure modes we could do without, and in the second place,\nhow is it not strictly *more* expensive than what happens now? You\nstill have to end up with all those TLB entries mapped in the child.\n\n(If your kernel is unable to pass down shared-memory TLBs effectively,\nISTM that's a kernel shortcoming not a Postgres architectural problem.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 27 Apr 2021 09:51:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: =?UTF-8?B?QXR0YWNoIHRvIHNoYXJlZCBtZW1vcnkgYWZ0ZXIgZm9yaygp?=" }, { "msg_contents": "> Windows has CreateProcess, which isn't available elsewhere.\nYes, we still need fork() on *nix. So the solution is to reduce the\noverhead of fork(). 
Attach to shared memory after fork() might be a\n\"Better shared memory management\".\n\n> This is one of the reasons for using a connection pooler like pgbouncer,\n> which can vastly reduce the number of new process creations Postgres has\nto do.\nYes, it’s another way I forgot to mention. But I think there should be a\ncleaner way without other component.\n\n> This proposal seems moderately insane.  In the first place, it\n> introduces failure modes we could do without, and in the second place,\n> how is it not strictly *more* expensive than what happens now?  You\n> still have to end up with all those TLB entries mapped in the child.\nYes, the idea is radical. But it’s practical.\n1. I don’t quite catch that. Can you explain it?\n2. Yes, the overall cost is still the same, but the cost can spread\ninto multi processes thus CPUs, not 100% on Postmaster.\n\n> (If your kernel is unable to pass down shared-memory TLBs effectively,\n> ISTM that's a kernel shortcoming not a Postgres architectural problem.)\nIndeed, it’s a kernel/CPUarch shortcoming. But it is also a Postgres\narchitectural problem. MySQL and Oracle have no such problem.\nIMHO Postgres should manage itself well(eg. IO/buffer pool/connection/...)\nand not rely so much on OS kernel...\n\nThe fork() used to be a genius hack, but now it’s a burden and it will get\nworse and worse. All I want to do is remove fork() or reduce the overhead.\nMaybe *nux will have CreateProcess someday(and I think it will), and we\nshould wait for it?\n\n\n", "msg_date": "Wed, 28 Apr 2021 16:52:23 +0800", "msg_from": "\"=?UTF-8?B?6YKx5a6H6IiqKOeDm+i/nCk=?=\" <yuhang.qyh@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?B?5Zue5aSN77yaQXR0YWNoIHRvIHNoYXJlZCBtZW1vcnkgYWZ0ZXIgZm9yaygp?=" } ]
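The idea floated in the thread above — keep large segments out of what the child inherits at fork() time and (re)attach them in the child afterwards — can be illustrated with named shared memory from Python. This is only a conceptual sketch, not PostgreSQL's shmget()/shmat()-based C code; the segment size and contents are made up:

```python
import os
from multiprocessing import shared_memory

def demo():
    # Parent creates a named segment; the child re-attaches it *by name
    # after fork()* rather than relying on the inherited mapping, which
    # is the essence of the "attach after fork" proposal.
    seg = shared_memory.SharedMemory(create=True, size=16)
    try:
        seg.buf[:5] = b"hello"
        pid = os.fork()
        if pid == 0:                  # child
            child = shared_memory.SharedMemory(name=seg.name)
            child.buf[5:6] = b"!"     # write through the fresh attachment
            child.close()
            os._exit(0)               # skip cleanup; parent owns the segment
        os.waitpid(pid, 0)
        return bytes(seg.buf[:6]).decode()  # parent sees the child's write
    finally:
        seg.close()
        seg.unlink()

print(demo())  # hello!
```

Whether a late attachment actually avoids the page-table/TLB copying cost raised in the thread is platform-dependent; the snippet only demonstrates that attaching after fork() observes the same memory as the parent's mapping.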
[ { "msg_contents": "Hi,\n\nTemporary tables usually gets a unique schema name, see this:\n\npostgres=# create temp table foo(i int);\nCREATE TABLE\npostgres=# explain verbose select * from foo;\n QUERY PLAN\n-----------------------------------------------------------------\n Seq Scan on pg_temp_3.foo (cost=0.00..35.50 rows=2550 width=4)\n Output: i\n(2 rows)\n\nThe problem is that explain-verbose regression test output becomes\nunstable when several concurrently running tests operate on temporary\ntables.\n\nI was wondering can we simply skip the temporary schema name from the\nexplain-verbose output or place the \"pg_temp\" schema name?\n\nThoughts/Suggestions?\n\nRegares,\nAmul\n\n\n", "msg_date": "Tue, 27 Apr 2021 10:50:22 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Skip temporary table schema name from explain-verbose output." }, { "msg_contents": "On Tue, Apr 27, 2021 at 10:51 AM Amul Sul <sulamul@gmail.com> wrote:\n>\n> Hi,\n>\n> Temporary tables usually gets a unique schema name, see this:\n>\n> postgres=# create temp table foo(i int);\n> CREATE TABLE\n> postgres=# explain verbose select * from foo;\n> QUERY PLAN\n> -----------------------------------------------------------------\n> Seq Scan on pg_temp_3.foo (cost=0.00..35.50 rows=2550 width=4)\n> Output: i\n> (2 rows)\n>\n> The problem is that explain-verbose regression test output becomes\n> unstable when several concurrently running tests operate on temporary\n> tables.\n>\n> I was wondering can we simply skip the temporary schema name from the\n> explain-verbose output or place the \"pg_temp\" schema name?\n>\n> Thoughts/Suggestions?\n\nHow about using an explain filter to replace the unstable text\npg_temp_3 to pg_temp_N instead of changing it in the core? 
Following\nare the existing explain filters: explain_filter,\nexplain_parallel_append, explain_analyze_without_memory,\nexplain_resultcache, explain_parallel_sort_stats, explain_sq_limit.\n\nLooks like some of the test cases already replace pg_temp_nn with pg_temp:\n-- \\dx+ would expose a variable pg_temp_nn schema name, so we can't use it here\nselect regexp_replace(pg_describe_object(classid, objid, objsubid),\n 'pg_temp_\\d+', 'pg_temp', 'g') as \"Object description\"\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 27 Apr 2021 11:07:18 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Skip temporary table schema name from explain-verbose output." }, { "msg_contents": "On Tue, Apr 27, 2021 at 11:07 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, Apr 27, 2021 at 10:51 AM Amul Sul <sulamul@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > Temporary tables usually gets a unique schema name, see this:\n> >\n> > postgres=# create temp table foo(i int);\n> > CREATE TABLE\n> > postgres=# explain verbose select * from foo;\n> > QUERY PLAN\n> > -----------------------------------------------------------------\n> > Seq Scan on pg_temp_3.foo (cost=0.00..35.50 rows=2550 width=4)\n> > Output: i\n> > (2 rows)\n> >\n> > The problem is that explain-verbose regression test output becomes\n> > unstable when several concurrently running tests operate on temporary\n> > tables.\n> >\n> > I was wondering can we simply skip the temporary schema name from the\n> > explain-verbose output or place the \"pg_temp\" schema name?\n> >\n> > Thoughts/Suggestions?\n>\n> How about using an explain filter to replace the unstable text\n> pg_temp_3 to pg_temp_N instead of changing it in the core? 
Following\n> are the existing explain filters: explain_filter,\n> explain_parallel_append, explain_analyze_without_memory,\n> explain_resultcache, explain_parallel_sort_stats, explain_sq_limit.\n>\n\nWell, yes eventually, that will be the kludge. I was wondering if that\ntable is accessible in a query via pg_temp schema then why should\nbother about printing the pg_temp_N schema name which is an internal\npurpose.\n\n> Looks like some of the test cases already replace pg_temp_nn with pg_temp:\n> -- \\dx+ would expose a variable pg_temp_nn schema name, so we can't use it here\n> select regexp_replace(pg_describe_object(classid, objid, objsubid),\n> 'pg_temp_\\d+', 'pg_temp', 'g') as \"Object description\"\n>\n\nThis \\d could be one example of why not simply show pg_temp instead of\npg_temp_N.\n\nRegards,\nAmul\n\n\n", "msg_date": "Tue, 27 Apr 2021 12:22:20 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Skip temporary table schema name from explain-verbose output." }, { "msg_contents": "On Tue, Apr 27, 2021 at 12:23 PM Amul Sul <sulamul@gmail.com> wrote:\n> >\n> > How about using an explain filter to replace the unstable text\n> > pg_temp_3 to pg_temp_N instead of changing it in the core? Following\n> > are the existing explain filters: explain_filter,\n> > explain_parallel_append, explain_analyze_without_memory,\n> > explain_resultcache, explain_parallel_sort_stats, explain_sq_limit.\n> >\n>\n> Well, yes eventually, that will be the kludge. I was wondering if that\n> table is accessible in a query via pg_temp schema then why should\n> bother about printing the pg_temp_N schema name which is an internal\n> purpose.\n\nAlthough only the associated session can access objects from that\nschema, I think, the entries in pg_class have different namespace oids\nand are accessible from other sessions. So knowing the actual schema\nname is useful for debugging purposes. 
Using auto_explain, the explain\noutput goes to server log, where access to two temporary tables with\nthe same name from different sessions can be identified by the actual\nschema name easily.\n\nI am not sure whether we should change explain output only for the\nsake of stable tests.\n\nYou could add a flag to EXPLAIN to mask pg_temp name but that's\nprobably an overkill. Filtering is a better option for tests.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 27 Apr 2021 18:59:09 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Skip temporary table schema name from explain-verbose output." }, { "msg_contents": "On Tue, Apr 27, 2021 at 6:59 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Tue, Apr 27, 2021 at 12:23 PM Amul Sul <sulamul@gmail.com> wrote:\n> > >\n> > > How about using an explain filter to replace the unstable text\n> > > pg_temp_3 to pg_temp_N instead of changing it in the core? Following\n> > > are the existing explain filters: explain_filter,\n> > > explain_parallel_append, explain_analyze_without_memory,\n> > > explain_resultcache, explain_parallel_sort_stats, explain_sq_limit.\n> > >\n> >\n> > Well, yes eventually, that will be the kludge. I was wondering if that\n> > table is accessible in a query via pg_temp schema then why should\n> > bother about printing the pg_temp_N schema name which is an internal\n> > purpose.\n>\n> Although only the associated session can access objects from that\n> schema, I think, the entries in pg_class have different namespace oids\n> and are accessible from other sessions. So knowing the actual schema\n> name is useful for debugging purposes. 
Using auto_explain, the explain\n> output goes to server log, where access to two temporary tables with\n> the same name from different sessions can be identified by the actual\n> schema name easily.\n>\n> I am not sure whether we should change explain output only for the\n> sake of stable tests.\n\nI agree to not change the explain code, just for tests.\n\n> You could add a flag to EXPLAIN to mask pg_temp name but that's\n> probably an overkill.\n\nIMO, you are right, it will be an overkill. We might end up having\nrequests to add flags for other cases as well.\n\n> Filtering is a better option for tests.\n\n+1. EXPLAIN output filtering is not something new, we have already\nstabilized a few tests.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 27 Apr 2021 19:08:19 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Skip temporary table schema name from explain-verbose output." }, { "msg_contents": "On Tue, Apr 27, 2021 at 7:08 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, Apr 27, 2021 at 6:59 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > On Tue, Apr 27, 2021 at 12:23 PM Amul Sul <sulamul@gmail.com> wrote:\n> > > >\n> > > > How about using an explain filter to replace the unstable text\n> > > > pg_temp_3 to pg_temp_N instead of changing it in the core? Following\n> > > > are the existing explain filters: explain_filter,\n> > > > explain_parallel_append, explain_analyze_without_memory,\n> > > > explain_resultcache, explain_parallel_sort_stats, explain_sq_limit.\n> > > >\n> > >\n> > > Well, yes eventually, that will be the kludge. 
I was wondering if that\n> > > table is accessible in a query via pg_temp schema then why should\n> > > bother about printing the pg_temp_N schema name which is an internal\n> > > purpose.\n> >\n> > Although only the associated session can access objects from that\n> > schema, I think, the entries in pg_class have different namespace oids\n> > and are accessible from other sessions. So knowing the actual schema\n> > name is useful for debugging purposes. Using auto_explain, the explain\n> > output goes to server log, where access to two temporary tables with\n> > the same name from different sessions can be identified by the actual\n> > schema name easily.\n> >\n\nMake sense, we would lose the ability to differentiate temporary\ntables from the auto_explain logs.\n\nRegards,\nAmul\n\n\n", "msg_date": "Wed, 28 Apr 2021 09:23:00 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Skip temporary table schema name from explain-verbose output." }, { "msg_contents": "> On Tue, Apr 27, 2021 at 7:08 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail\n> Make sense, we would lose the ability to differentiate temporary\n> tables from the auto_explain logs.\n\nThere's no useful differentiation that can be done with the temp\nschema name. They're assigned on connection start randomly from the\npool of temp schemas. The names you find in the log won't be useful\nand as new connections are made the same schema names will be reused\nfor different connections.\n\nI would say it makes sense to remove them -- except perhaps it makes\nit harder to parse explain output. If explain verbose always includes\nthe schema then it's easier for a parser to make sense of the explain\nplan output without having to be prepared to sometimes see a schema\nand sometimes not. That's probably a pretty hypothetical concern\nhowever since all the explain plan parsers that actually exist are\nprepared to deal with non-verbose plans anyways. 
And we have actual\nmachine-readable formats too anyways.\n\n\n-- \ngreg\n\n\n", "msg_date": "Wed, 28 Apr 2021 10:16:35 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Skip temporary table schema name from explain-verbose output." }, { "msg_contents": "Greg Stark <stark@mit.edu> writes:\n>> On Tue, Apr 27, 2021 at 7:08 PM Bharath Rupireddy\n>> <bharath.rupireddyforpostgres@gmail\n>> Make sense, we would lose the ability to differentiate temporary\n>> tables from the auto_explain logs.\n\n> There's no useful differentiation that can be done with the temp\n> schema name.\n\nAgreed.\n\n> I would say it makes sense to remove them -- except perhaps it makes\n> it harder to parse explain output.\n\nI don't think we should remove them. However, it could make sense to\nprint the \"pg_temp\" alias instead of the real schema name when we\nare talking about myTempNamespace. Basically try to make that alias\na bit less leaky.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 28 Apr 2021 10:26:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Skip temporary table schema name from explain-verbose output." }, { "msg_contents": "On Wed, Apr 28, 2021 at 7:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Greg Stark <stark@mit.edu> writes:\n> >> On Tue, Apr 27, 2021 at 7:08 PM Bharath Rupireddy\n> >> <bharath.rupireddyforpostgres@gmail\n> >> Make sense, we would lose the ability to differentiate temporary\n> >> tables from the auto_explain logs.\n>\n> > There's no useful differentiation that can be done with the temp\n> > schema name.\n>\nI see.\n\n> Agreed.\n>\n> > I would say it makes sense to remove them -- except perhaps it makes\n> > it harder to parse explain output.\n>\n> I don't think we should remove them. However, it could make sense to\n> print the \"pg_temp\" alias instead of the real schema name when we\n> are talking about myTempNamespace. 
Basically try to make that alias\n> a bit less leaky.\n\n+1, let's replace it by \"pg_temp\" -- did the same in that attached 0001 patch.\n\nAlso, I am wondering if we need a similar kind of handling in psql\n'\\d' meta-command as well? I did trial changes in the 0002 patch, but\nI am not very sure about it & a bit skeptical for code change as\nwell. Do let me know if you have any suggestions/thoughts or if we\ndon't want to, so please ignore that patch, thanks.\n\nRegards,\nAmul", "msg_date": "Thu, 29 Apr 2021 12:46:43 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Skip temporary table schema name from explain-verbose output." }, { "msg_contents": "On Thu, 29 Apr 2021 at 08:17, Amul Sul <sulamul@gmail.com> wrote:\n> On Wed, Apr 28, 2021 at 7:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I don't think we should remove them. However, it could make sense to\n> > print the \"pg_temp\" alias instead of the real schema name when we\n> > are talking about myTempNamespace. Basically try to make that alias\n> > a bit less leaky.\n>\n> +1, let's replace it by \"pg_temp\" -- did the same in that attached 0001 patch.\n\nSounds like a good change.\n\nSurely we need a test to exercise this works? Otherwise ready...\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 26 Jul 2021 13:53:20 +0100", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Skip temporary table schema name from explain-verbose output." }, { "msg_contents": "Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> Sounds like a good change.\n> Surely we need a test to exercise this works? 
Otherwise ready...\n\nThere are lots of places in the core regression tests where we'd have\nused a temp table, except that we needed to do EXPLAIN and the results\nwould've been unstable, so we used a short-lived plain table instead.\nFind one of those and change it to use a temp table.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 26 Jul 2021 10:21:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Skip temporary table schema name from explain-verbose output." }, { "msg_contents": "I wrote:\n> Simon Riggs <simon.riggs@enterprisedb.com> writes:\n>> Surely we need a test to exercise this works? Otherwise ready...\n\n> There are lots of places in the core regression tests where we'd have\n> used a temp table, except that we needed to do EXPLAIN and the results\n> would've been unstable, so we used a short-lived plain table instead.\n> Find one of those and change it to use a temp table.\n\nHmm ... I looked through the core regression tests' usages of EXPLAIN\nVERBOSE and didn't really find any that it seemed to make sense to change\nthat way. I guess we've been more effective at programming around that\nrestriction than I thought.\n\nAnyway, after looking at the 0001 patch, I think there's a pretty large\noversight in that it doesn't touch ruleutils.c, although EXPLAIN relies\nheavily on that to print expressions and suchlike. We could account\nfor that as in the attached revision of 0001.\n\nHowever, I wonder whether this isn't going in the wrong direction.\nInstead of piecemeal s/get_namespace_name/get_namespace_name_or_temp/,\nwe should consider just putting this behavior right into\nget_namespace_name, and dropping the separate get_namespace_name_or_temp\nfunction. 
I can't really see any situation in which it's important\nto report the exact schema name of our own temp schema.\n\nOn the other hand, I don't like 0002 one bit, because it's not accounting\nfor whether the temp schema it's mangling is *our own* temp schema or some\nother session's. I do not think it is wise or even safe to report some\nother temp schema as being \"pg_temp\". By the same token, I wonder whether\nthis bit in event_trigger.c is a good idea or a safety hazard:\n\n /* XXX not quite get_namespace_name_or_temp */\n if (isAnyTempNamespace(schema_oid))\n schema = pstrdup(\"pg_temp\");\n else\n schema = get_namespace_name(schema_oid);\n\nAlvaro, you seem to be responsible for both the existence of the separate\nget_namespace_name_or_temp function and the fact that it's being avoided\nhere. I wonder what you think about this.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 26 Jul 2021 12:49:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Skip temporary table schema name from explain-verbose output." }, { "msg_contents": "On Mon, 26 Jul 2021 at 17:49, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> >> Surely we need a test to exercise this works? Otherwise ready...\n>\n> > There are lots of places in the core regression tests where we'd have\n> > used a temp table, except that we needed to do EXPLAIN and the results\n> > would've been unstable, so we used a short-lived plain table instead.\n> > Find one of those and change it to use a temp table.\n>\n> Hmm ... I looked through the core regression tests' usages of EXPLAIN\n> VERBOSE and didn't really find any that it seemed to make sense to change\n> that way. 
I guess we've been more effective at programming around that\n> restriction than I thought.\n>\n> Anyway, after looking at the 0001 patch, I think there's a pretty large\n> oversight in that it doesn't touch ruleutils.c, although EXPLAIN relies\n> heavily on that to print expressions and suchlike. We could account\n> for that as in the attached revision of 0001.\n>\n> However, I wonder whether this isn't going in the wrong direction.\n> Instead of piecemeal s/get_namespace_name/get_namespace_name_or_temp/,\n> we should consider just putting this behavior right into\n> get_namespace_name, and dropping the separate get_namespace_name_or_temp\n> function. I can't really see any situation in which it's important\n> to report the exact schema name of our own temp schema.\n\nThat sounds much better because any wholesale change like that will\naffect 100s of places in extensions and it would be easier if we made\njust one change in core.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 26 Jul 2021 18:01:19 +0100", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Skip temporary table schema name from explain-verbose output." }, { "msg_contents": "On Mon, Jul 26, 2021 at 12:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I can't really see any situation in which it's important\n> to report the exact schema name of our own temp schema.\n\nIt would actually be nice if there were some easy way of getting that\nfor the rare situations in which there are problems. For example, if\nthe catalog entries get corrupted and you can't access some table in\nyour pg_temp schema, you might like to know which pg_temp schema\nyou've got so that you can be sure to examine the right catalog\nentries to fix the problem or understand the problem or whatever you\nare trying to do. 
I don't much care exactly how we go about making\nthat information available and I agree that showing pg_temp_NNN in\nEXPLAIN output is worse than just pg_temp. I'm just saying that\nconcealing too thoroughly what is actually happening can be a problem\nin the rare instance where troubleshooting is required.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 26 Jul 2021 13:15:22 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Skip temporary table schema name from explain-verbose output." }, { "msg_contents": "On 2021-Jul-26, Tom Lane wrote:\n\n> Alvaro, you seem to be responsible for both the existence of the separate\n> get_namespace_name_or_temp function and the fact that it's being avoided\n> here. I wonder what you think about this.\n\nThe reason I didn't touch get_namespace_name then (e9a077cad379) was\nthat I didn't want to change the user-visible behavior for any existing\nfeatures; I was just after a way to implement dropped-object DDL trigger\ntracking. If we agree that displaying pg_temp instead of pg_temp_XXX\neverywhere is an improvement, then I don't see a reason not to change\nhow get_namespace_name works and get rid of get_namespace_name_or_temp.\n\nI don't see much usefulness in displaying the exact name of the temp\nnamespace anywhere, particularly since using \"pg_temp\" as a\nqualification in queries already refers to the current backend's temp\nnamespace. Trying to refer to it by exact name in SQL may lead to\naffecting some other backend's temp objects ...\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 26 Jul 2021 13:30:42 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Skip temporary table schema name from explain-verbose output." 
}, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Jul 26, 2021 at 12:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I can't really see any situation in which it's important\n>> to report the exact schema name of our own temp schema.\n\n> It would actually be nice if there were some easy way of getting that\n> for the rare situations in which there are problems.\n\nI experimented with pushing the behavior into get_namespace_name,\nand immediately ran into problems, for example\n\n--- /home/postgres/pgsql/src/test/regress/expected/jsonb.out 2021-03-01 16:32\n:13.348655633 -0500\n+++ /home/postgres/pgsql/src/test/regress/results/jsonb.out 2021-07-26 13:10\n:53.523540855 -0400\n@@ -320,11 +320,9 @@\n where tablename = 'rows' and\n schemaname = pg_my_temp_schema()::regnamespace::text\n order by 1;\n- attname | histogram_bounds \n----------+--------------------------\n- x | [1, 2, 3]\n- y | [\"txt1\", \"txt2\", \"txt3\"]\n-(2 rows)\n+ attname | histogram_bounds \n+---------+------------------\n+(0 rows)\n \n -- to_jsonb, timestamps\n select to_jsonb(timestamp '2014-05-28 12:22:35.614298');\n\nWhat's happening here is that regnamespace_out is returning\n'pg_temp' which doesn't match any name visible in pg_namespace.\nSo that would pretty clearly break user queries as well as\nour own tests. I'm afraid that the wholesale behavior change\nI was imagining isn't going to work. Probably we'd better stick\nto doing something close to the v2 patch I posted.\n\nI'm still suspicious of that logic in event_trigger.c, though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 26 Jul 2021 13:33:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Skip temporary table schema name from explain-verbose output." 
}, { "msg_contents": "On 2021-Jul-26, Tom Lane wrote:\n\n> On the other hand, I don't like 0002 one bit, because it's not accounting\n> for whether the temp schema it's mangling is *our own* temp schema or some\n> other session's. I do not think it is wise or even safe to report some\n> other temp schema as being \"pg_temp\". By the same token, I wonder whether\n> this bit in event_trigger.c is a good idea or a safety hazard:\n> \n> /* XXX not quite get_namespace_name_or_temp */\n> if (isAnyTempNamespace(schema_oid))\n> schema = pstrdup(\"pg_temp\");\n> else\n> schema = get_namespace_name(schema_oid);\n\nOh, you meant this one. To be honest I don't remember *why* this code\nwants to show remote temp tables as just \"pg_temp\" ... it's possible\nthat some test in the DDL-to-JSON code depended on this behavior.\nWithout spending too much time analyzing it, I agree that it seems\ndangerous and might lead to referring to unintended objects. (Really,\nmy memory is not clear on *why* we would be referring to temp tables of\nother sessions.)\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"No necesitamos banderas\n No reconocemos fronteras\" (Jorge González)\n\n\n", "msg_date": "Mon, 26 Jul 2021 13:53:51 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Skip temporary table schema name from explain-verbose output." }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> Oh, you meant this one. To be honest I don't remember *why* this code\n> wants to show remote temp tables as just \"pg_temp\" ... it's possible\n> that some test in the DDL-to-JSON code depended on this behavior.\n> Without spending too much time analyzing it, I agree that it seems\n> dangerous and might lead to referring to unintended objects. 
(Really,\n> my memory is not clear on *why* we would be referring to temp tables of\n> other sessions.)\n\nYeah, it's not very clear why that would happen, but if it does,\nshowing \"pg_temp\" seems pretty misleading. I tried replacing the\ncode with just get_namespace_name_or_temp(), and it still gets\nthrough check-world, for whatever that's worth.\n\nI'm inclined to change this in HEAD but leave it alone in the back\nbranches. While it seems pretty bogus, it's not clear if anyone\nout there could be relying on the current behavior.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 26 Jul 2021 14:30:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Skip temporary table schema name from explain-verbose output." }, { "msg_contents": "I wrote:\n> I'm inclined to change this in HEAD but leave it alone in the back\n> branches. While it seems pretty bogus, it's not clear if anyone\n> out there could be relying on the current behavior.\n\nI've pushed both the 0001 v2 patch and the event trigger change,\nand am going to mark the CF entry closed, because leaving it open\nwould confuse the cfbot. I think there may still be room to do\nsomething about pg_temp_NNN output in psql's \\d commands as 0002\nattempted to, but I don't have immediate ideas about how to do\nthat in a safe/clean way.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 27 Jul 2021 12:12:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Skip temporary table schema name from explain-verbose output." } ]
[ { "msg_contents": "Hi all,\n\nI have been looking at and testing the patch set for CREATE TABLE\nCOMPRESSION, and spotted a couple of things in parallel of some work\ndone by Jacob (added in CC).\n\nThe behavior around CREATE TABLE AS and matviews is a bit confusing,\nand not documented. First, at the grammar level, it is not possible\nto specify which compression option is used per column when creating\nthe relation. So, all the relation columns would just set a column's\ncompression to be default_toast_compression for all the toastable\ncolumns of the relation. That's not enforceable at column level when\nthe relation is created, except with a follow-up ALTER TABLE. That's\nsimilar to STORAGE when it comes to matviews, but these are at least\ndocumented.\n\nAnd so, ALTER MATERIALIZED VIEW supports SET COMPRESSION but this is\nnot mentioned in its docs:\nhttps://www.postgresql.org/docs/devel/sql-altermaterializedview.html\npsql could have tab completion support for that.\n\nThere are no tests in pg_dump to make sure that some ALTER\nMATERIALIZED VIEW or ALTER TABLE commands are generated when the\ncompression of a matview's or table's column is changed. This depends\non the value of default_toast_compression, but that would be nice to\nhave something, and get at least some coverage with\n--no-toast-compression. You would need to make the tests conditional\nhere, for example with check_pg_config() (see for example what's done\nwith channel binding in ssl/t/002_scram.pl).\n\nAnother thing is the handling of the per-value compression that could\nbe confusing to the user. 
As no materialization of the data is done\nfor a CTAS or a matview, and the regression tests of compression.sql\ntrack that AFAIK, there can be a mix of toast values compressed with\nlz4 or pglz, with pg_attribute.attcompression being one or the other.\n\nNow, we don't really document any of that, and the per-column\ncompression value would be set to default_toast_compression while the\nstored values may use a mix of the compression methods, depending on\nwhere the toasted values come from. If this behavior is intended, this\nmakes me wonder in what the possibility to set the compression for a\nmaterialized view column is useful for except for a logical\ndump/restore? As of HEAD we'd just insert the toasted value from the\norigin as-is so the compression of the column has no effect at all.\nAnother thing here is the inconsistency that this brings with pg_dump.\nFor example, as the dumped values are decompressed, we could have\nvalues compressed with pglz at the origin, with a column using lz4\nwithin its definition that would make everything compressed with lz4\nonce the values are restored. This choice may be fine, but I think\nthat it would be good to document all that. That would be less\nsurprising to the user.\n\nSimilarly, we may want to document that COMPRESSION does not trigger a\ntable rewrite, but that it is effective only for the new toast values\ninserted if a tuple is rebuilt and rewritten?\n\nWould it be better to document that pg_column_compression() returns\nNULL if the column is not a toastable type or if the column's value is\nnot compressed?\n\nThe flexibility with allow_system_table_mods allows one to change the\ncompression method of catalogs, for example switching rolpassword with\na SCRAM verifier large enough to be toasted would lock an access to\nthe cluster if restarting the server without lz4 support. 
I shouldn't\nhave done that but I did, and I like it :)\n\nThe design used by this feature is pretty cool, as long as you don't\nread the compressed values, physical replication can work out of the\nbox even across nodes that are built with or without lz4.\n\nThanks,\n--\nMichael", "msg_date": "Tue, 27 Apr 2021 15:22:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Small issues with CREATE TABLE COMPRESSION" }, { "msg_contents": "On Tue, Apr 27, 2021 at 03:22:25PM +0900, Michael Paquier wrote:\n> Hi all,\n> \n> I have been looking at and testing the patch set for CREATE TABLE\n> COMPRESSION, and spotted a couple of things in parallel of some work\n> done by Jacob (added in CC).\n> \n> The behavior around CREATE TABLE AS and matviews is a bit confusing,\n> and not documented. First, at the grammar level, it is not possible\n> to specify which compression option is used per column when creating\n> the relation. So, all the relation columns would just set a column's\n> compression to be default_toast_compression for all the toastable\n> columns of the relation. That's not enforceable at column level when\n> the relation is created, except with a follow-up ALTER TABLE. That's\n> similar to STORAGE when it comes to matviews, but these are at least\n> documented.\n> \n> And so, ALTER MATERIALIZED VIEW supports SET COMPRESSION but this is\n> not mentioned in its docs:\n> https://www.postgresql.org/docs/devel/sql-altermaterializedview.html\n>\n> psql could have tab completion support for that.\n\nActually ALTER matview ALTER col has no tab completion at all, right ?\n\n> Now, we don't really document any of that, and the per-column\n> compression value would be set to default_toast_compression while the\n> stored values may use a mix of the compression methods, depending on\n> where the toasted values come from. 
If this behavior is intended, this\n> makes me wonder in what the possibility to set the compression for a\n> materialized view column is useful for except for a logical\n> dump/restore? As of HEAD we'd just insert the toasted value from the\n> origin as-is so the compression of the column has no effect at all.\n\nThat may be true if the mat view is trivial, but not true if it has\nexpressions. The mat view column may be built on multiple table columns, or be\nof a different type than the columns it's built on top of, so the relationship\nmay not be so direct.\n\n> Another thing here is the inconsistency that this brings with pg_dump.\n> For example, as the dumped values are decompressed, we could have\n> values compressed with pglz at the origin, with a column using lz4\n> within its definition that would make everything compressed with lz4\n> once the values are restored. This choice may be fine, but I think\n> that it would be good to document all that. That would be less\n> surprising to the user.\n\nCan you suggest what or where we'd say it? I think this is just the behavior\nthat pg_dump shouldn't lose the user's compression setting.\n\nThe setting itself is for \"future\" data, and the only way to guarantee what\ncompression types are in use are by vacuum full/cluster or pg_dump restore.\n\n> Similarly, we may want to document that COMPRESSION does not trigger a\n> table rewrite, but that it is effective only for the new toast values\n> inserted if a tuple is rebuilt and rewritten?\n\nGood point. I started with this.\n\ndiff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml\nindex 39927be41e..8cceea41d0 100644\n--- a/doc/src/sgml/ref/alter_table.sgml\n+++ b/doc/src/sgml/ref/alter_table.sgml\n@@ -391,7 +391,21 @@ WITH ( MODULUS <replaceable class=\"parameter\">numeric_literal</replaceable>, REM\n </term>\n <listitem>\n <para>\n- This sets the compression method for a column. 
The supported compression\n+ This sets the compression method to be used for data inserted into a column.\n+\n+ This does not cause the table to be rewritten, so existing data may still\n+ be compressed with other compression methods. If the table is rewritten with\n+ <command>VACUUM FULL</command> or <command>CLUSTER</command>, or restored\n+ with <application>pg_restore</application>, then all tuples are rewritten\n+ with the configured compression methods.\n+\n+ Also, note that when data is inserted from another relation (for example,\n+ by <command>INSERT ... SELECT</command>), tuples from the source data are\n+ not necessarily detoasted, and any previously compressed data is retained\n+ with its existing compression method, rather than recompressing with the\n+ compression methods of the target columns.\n+\n+ The supported compression\n methods are <literal>pglz</literal> and <literal>lz4</literal>.\n <literal>lz4</literal> is available only if <literal>--with-lz4</literal>\n was used when building <productname>PostgreSQL</productname>.\n\n\n", "msg_date": "Wed, 28 Apr 2021 23:01:33 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Small issues with CREATE TABLE COMPRESSION" }, { "msg_contents": "Hi,\n\nMy analysis of this open item is there are no code-level issues here,\nbut there is one line of documentation that clearly got forgotten,\nsome other documentation changes that might be nice, and maybe someone\nwants to work more on testing and/or tab completion at some point.\n\nOn Tue, Apr 27, 2021 at 2:22 AM Michael Paquier <michael@paquier.xyz> wrote:\n> The behavior around CREATE TABLE AS and matviews is a bit confusing,\n> and not documented.\n\nIt's no different from a bunch of other column properties that you\nalso can't set when creating a materialized view. 
I don't think this\npatch created this problem, or that it is responsible for solving it.\n\n> And so, ALTER MATERIALIZED VIEW supports SET COMPRESSION but this is\n> not mentioned in its docs:\n\nI agree that's an oversight and should be fixed.\n\n> https://www.postgresql.org/docs/devel/sql-altermaterializedview.html\n> psql could have tab completion support for that.\n\nI don't believe it's our policy that incomplete tab completion support\nrises to the level of an open item, especially given that, as Justin\npoints out, ALTER MATERIALIZED VIEW name ALTER COLUMN name <tab>\ndoesn't complete anything *at all*.\n\n> There are no tests in pg_dump to make sure that some ALTER\n> MATERIALIZED VIEW or ALTER TABLE commands are generated when the\n> compression of a matview's or table's column is changed.\n\nTrue, but it does seem to work. I am happy if you or anyone want to\nwrite some tests.\n\n> Another thing is the handling of the per-value compression that could\n> be confusing to the user. As no materialization of the data is done\n> for a CTAS or a matview, and the regression tests of compression.sql\n> track that AFAIK, there can be a mix of toast values compressed with\n> lz4 or pglz, with pg_attribute.attcompression being one or the other.\n\nYes. This is mentioned in the commit message, and was discussed\nextensively on the original thread. We probably should have included\nit in the documentation, as well. Justin's text seems fairly\nreasonable to me.\n\n> Similarly, we may want to document that COMPRESSION does not trigger a\n> table rewrite, but that it is effective only for the new toast values\n> inserted if a tuple is rebuilt and rewritten?\n\nSure. 
I think Justin's text covers this, too.\n\n> Would it be better to document that pg_column_compression() returns\n> NULL if the column is not a toastable type or if the column's value is\n> not compressed?\n\nWe can.\n\nHere's a proposed patch for the documentation issues not covered by\nJustin's proposal.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 4 May 2021 14:36:39 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Small issues with CREATE TABLE COMPRESSION" }, { "msg_contents": "On Thu, Apr 29, 2021 at 9:31 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Tue, Apr 27, 2021 at 03:22:25PM +0900, Michael Paquier wrote:\n> > Hi all,\n>\n> > And so, ALTER MATERIALIZED VIEW supports SET COMPRESSION but this is\n> > not mentioned in its docs:\n> > https://www.postgresql.org/docs/devel/sql-altermaterializedview.html\n> >\n> > psql could have tab completion support for that.\n>\n> Actually ALTER matview ALTER col has no tab completion at all, right ?\n\nRight.\n\n> Good point. I started with this.\n>\n> diff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml\n> index 39927be41e..8cceea41d0 100644\n> --- a/doc/src/sgml/ref/alter_table.sgml\n> +++ b/doc/src/sgml/ref/alter_table.sgml\n> @@ -391,7 +391,21 @@ WITH ( MODULUS <replaceable class=\"parameter\">numeric_literal</replaceable>, REM\n> </term>\n> <listitem>\n> <para>\n> - This sets the compression method for a column. The supported compression\n> + This sets the compression method to be used for data inserted into a column.\n> +\n> + This does not cause the table to be rewritten, so existing data may still\n> + be compressed with other compression methods. 
If the table is rewritten with\n> + <command>VACUUM FULL</command> or <command>CLUSTER</command>, or restored\n> + with <application>pg_restore</application>, then all tuples are rewritten\n> + with the configured compression methods.\n> +\n> + Also, note that when data is inserted from another relation (for example,\n> + by <command>INSERT ... SELECT</command>), tuples from the source data are\n> + not necessarily detoasted, and any previously compressed data is retained\n> + with its existing compression method, rather than recompressing with the\n> + compression methods of the target columns.\n> +\n> + The supported compression\n> methods are <literal>pglz</literal> and <literal>lz4</literal>.\n> <literal>lz4</literal> is available only if <literal>--with-lz4</literal>\n> was used when building <productname>PostgreSQL</productname>.\n\nYour documentation looks fine to me.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 May 2021 10:43:11 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Small issues with CREATE TABLE COMPRESSION" }, { "msg_contents": "On Wed, May 5, 2021 at 12:06 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n\n> > There are no tests in pg_dump to make sure that some ALTER\n> > MATERIALIZED VIEW or ALTER TABLE commands are generated when the\n> > compression of a matview's or table's column is changed.\n>\n> True, but it does seem to work. I am happy if you or anyone want to\n> write some tests.\n\nI think it will be really hard to generate such a test in pg_dump,\nbecause default we are compiling --without-lz4, which means we have\nonly one compression option available, and if there is only one option\navailable then the column compression method and the default\ncompression method will be same so the dump will not generate an extra\ncommand of type ALTER TABLE... 
SET COMPRESSION.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 May 2021 11:02:05 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Small issues with CREATE TABLE COMPRESSION" }, { "msg_contents": "On Wed, May 5, 2021 at 11:02 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, May 5, 2021 at 12:06 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n>\n> > > There are no tests in pg_dump to make sure that some ALTER\n> > > MATERIALIZED VIEW or ALTER TABLE commands are generated when the\n> > > compression of a matview's or table's column is changed.\n> >\n> > True, but it does seem to work. I am happy if you or anyone want to\n> > write some tests.\n>\n> I think it will be really hard to generate such a test in pg_dump,\n> because default we are compiling --without-lz4, which means we have\n> only one compression option available, and if there is only one option\n> available then the column compression method and the default\n> compression method will be same so the dump will not generate an extra\n> command of type ALTER TABLE... SET COMPRESSION.\n\nI think we already have such test cases at least through pg_upgrade.\nBasically, if you see in compression.sql we are not dropping the table\nso that pg_upgrade and dump them and test. So if test run --with-lz4\nthen in pg_upgrade dump we can see ALTER TABLE... 
SET COMPRESSION type\nof commands.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 May 2021 13:41:03 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Small issues with CREATE TABLE COMPRESSION" }, { "msg_contents": "On Wed, May 05, 2021 at 01:41:03PM +0530, Dilip Kumar wrote:\n> I think we already have such test cases at least through pg_upgrade.\n> Basically, if you see in compression.sql we are not dropping the table\n> so that pg_upgrade and dump them and test. So if test run --with-lz4\n> then in pg_upgrade dump we can see ALTER TABLE... SET COMPRESSION type\n> of commands.\n\nThe TAP tests of pg_dump are much more picky than what pg_upgrade is\nable to do. With the existing set of tests in place, what you are\nable to detect is that pg_upgrade does not *break* if there are tables\nwith attributes using various compression types, but that would not be\nenough to make sure that the correct compression method is *applied*\ndepending on the context expected (default_toast_compression + the\nattribute compression + pg_dump options), which is what the TAP tests\nof pg_dump are able to correctly detect if extended in an appropriate\nway.\n\nWith what's on HEAD, we would easily miss any bugs introduced in\npg_dump that change the set of commands generated depending on the\noptions given by a user, but still allow pg_upgrade to work correctly.\nFor example, there could be issues where we finish by setting up the\nincorrect compression option, with pg_upgrade happily finishing.\nThere is a gap in the test coverage here.\n--\nMichael", "msg_date": "Wed, 5 May 2021 19:30:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Small issues with CREATE TABLE COMPRESSION" }, { "msg_contents": "On Wed, May 5, 2021 at 4:00 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, May 05, 2021 at 01:41:03PM +0530, Dilip 
Kumar wrote:\n> > I think we already have such test cases at least through pg_upgrade.\n> > Basically, if you see in compression.sql we are not dropping the table\n> > so that pg_upgrade and dump them and test. So if test run --with-lz4\n> > then in pg_upgrade dump we can see ALTER TABLE... SET COMPRESSION type\n> > of commands.\n>\n> The TAP tests of pg_dump are much more picky than what pg_upgrade is\n> able to do. With the existing set of tests in place, what you are\n> able to detect is that pg_upgrade does not *break* if there are tables\n> with attributes using various compression types, but that would not be\n> enough to make sure that the correct compression method is *applied*\n> depending on the context expected (default_toast_compression + the\n> attribute compression + pg_dump options), which is what the TAP tests\n> of pg_dump are able to correctly detect if extended in an appropriate\n> way.\n\n Okay, got your point.\n\n> With what's on HEAD, we would easily miss any bugs introduced in\n> pg_dump that change the set of commands generated depending on the\n> options given by a user, but still allow pg_upgrade to work correctly.\n> For example, there could be issues where we finish by setting up the\n> incorrect compression option, with pg_upgrade happily finishing.\n> There is a gap in the test coverage here.\n\nBasically, the problem is default compilation is --without-lz4 so by\ndefault there is only one compression method and with only one\ncompression method we can not generate the test case you asked for,\nbecause that will be the default compression method and we don't dump\nthe default compression method.\n\nSo basically, if we have to write this test case in pg_dump then we\nwill have to use lz4 which means it will generate different output\n--with-lz4 vs --without-lz4. 
With a simple regress test it is easy to\ndeal with such cases by keeping multiple .out files but I am not sure\nwhether we can do this easily with pg_dump tests without adding much\ncomplexity?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 May 2021 16:39:16 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Small issues with CREATE TABLE COMPRESSION" }, { "msg_contents": "On Wed, May 5, 2021 at 7:09 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> So basically, if we have to write this test case in pg_dump then we\n> will have to use lz4 which means it will generate different output\n> --with-lz4 vs --without-lz4. With a simple regress test it is easy to\n> deal with such cases by keeping multiple .out files but I am not sure\n> whether we can do this easily with pg_dump tests without adding much\n> complexity?\n\nTAP tests have a facility for conditionally skipping tests; see\nperldoc Test::More. That's actually superior to what you can do with\npg_regress. We'd need to come up with some logic to determine when to\nskip or not, though. Perhaps the easiest method would be to have the\nrelevant Perl script try to create a table with an lz4 column. If that\nworks, then perform the LZ4-based tests. If it fails, check the error\nmessage. If it says anything that LZ4 is not supported by this build,\nskip those tests. If it says anything else, die.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 May 2021 09:59:41 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Small issues with CREATE TABLE COMPRESSION" }, { "msg_contents": "On Wed, May 05, 2021 at 09:59:41AM -0400, Robert Haas wrote:\n> TAP tests have a facility for conditionally skipping tests; see\n> perldoc Test::More. That's actually superior to what you can do with\n> pg_regress. 
We'd need to come up with some logic to determine when to\n> skip or not, though. Perhaps the easiest method would be to have the\n> relevant Perl script try to create a table with an lz4 column. If that\n> works, then perform the LZ4-based tests. If it fails, check the error\n> message. If it says anything that LZ4 is not supported by this build,\n> skip those tests. If it says anything else, die.\n\nThere is a simpler and cheaper method to make the execution of TAP\ntest conditional. As in src/test/ssl/t/002_scram.pl for channel\nbinding, I think that you could use something like\ncheck_pg_config(\"#define HAVE_LIBLZ4 1\") and use its result to decide\nwhich tests to skip or not.\n--\nMichael", "msg_date": "Thu, 6 May 2021 09:05:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Small issues with CREATE TABLE COMPRESSION" }, { "msg_contents": "On Thu, May 6, 2021 at 5:35 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, May 05, 2021 at 09:59:41AM -0400, Robert Haas wrote:\n> > TAP tests have a facility for conditionally skipping tests; see\n> > perldoc Test::More. That's actually superior to what you can do with\n> > pg_regress. We'd need to come up with some logic to determine when to\n> > skip or not, though. Perhaps the easiest method would be to have the\n> > relevant Perl script try to create a table with an lz4 column. If that\n> > works, then perform the LZ4-based tests. If it fails, check the error\n> > message. If it says anything that LZ4 is not supported by this build,\n> > skip those tests. If it says anything else, die.\n>\n> There is a simpler and cheaper method to make the execution of TAP\n> test conditional. As in src/test/ssl/t/002_scram.pl for channel\n> binding, I think that you could use something like\n> check_pg_config(\"#define HAVE_LIBLZ4 1\") and use its result to decide\n> which tests to skip or not.\n\nThanks, Robert and Michael for your input. 
I will try to understand\nhow it is done in the example shared by you and come up with the test\nonce I get time. I assume this is not something urgent.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 6 May 2021 10:45:53 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Small issues with CREATE TABLE COMPRESSION" }, { "msg_contents": "On Thu, May 6, 2021 at 10:45 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\nI noticed that the error code for invalid compression method is not\nperfect, basically when we pass the invalid compression method during\nCREATE/ALTER table that time we give\nERRCODE_FEATURE_NOT_SUPPORTED. I think the correct error code is\nERRCODE_INVALID_PARAMETER_VALUE. I have attached a patch to fix this.\n\nI thought of starting a new thread first but then I thought the\nsubject of this thread is quite generic and this is a fairly small fix\nso we can use the same thread.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 6 May 2021 17:01:23 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Small issues with CREATE TABLE COMPRESSION" }, { "msg_contents": "On Thu, May 06, 2021 at 05:01:23PM +0530, Dilip Kumar wrote:\n> I noticed that the error code for invalid compression method is not\n> perfect, basically when we pass the invalid compression method during\n> CREATE/ALTER table that time we give\n> ERRCODE_FEATURE_NOT_SUPPORTED. I think the correct error code is\n> ERRCODE_INVALID_PARAMETER_VALUE. 
I have attached a patch to fix this.\n\nYeah, I agree that this is an improvement, so let's fix this.\n--\nMichael", "msg_date": "Thu, 6 May 2021 21:04:57 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Small issues with CREATE TABLE COMPRESSION" }, { "msg_contents": "On Thu, May 06, 2021 at 10:45:53AM +0530, Dilip Kumar wrote:\n> Thanks, Robert and Michael for your input. I will try to understand\n> how it is done in the example shared by you and come up with the test\n> once I get time. I assume this is not something urgent.\n\nThanks. FWIW, I'd rather see this gap closed asap, as features should\ncome with proper tests IMO.\n\nWhile on it, I can see that there is no support for --with-lz4 in the\nMSVC scripts. I think that this is something where we had better\nclose the gap, and upstream provides binaries on Windows on their\nrelease page:\nhttps://github.com/lz4/lz4/releases\n\nAnd I am familiar with both areas, so I have no helping out and\ngetting that in shape correctly before beta1.\n--\nMichael", "msg_date": "Thu, 6 May 2021 21:12:37 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Small issues with CREATE TABLE COMPRESSION" }, { "msg_contents": "On Thu, May 6, 2021 at 5:42 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, May 06, 2021 at 10:45:53AM +0530, Dilip Kumar wrote:\n> > Thanks, Robert and Michael for your input. I will try to understand\n> > how it is done in the example shared by you and come up with the test\n> > once I get time. I assume this is not something urgent.\n>\n> Thanks. FWIW, I'd rather see this gap closed asap, as features should\n> come with proper tests IMO.\n\nI have done this please find the attached patch.\n\n>\n> While on it, I can see that there is no support for --with-lz4 in the\n> MSVC scripts. 
I think that this is something where we had better\n> close the gap, and upstream provides binaries on Windows on their\n> release page:\n> https://github.com/lz4/lz4/releases\n>\n> And I am familiar with both areas, so I have no helping out and\n> getting that in shape correctly before beta1.\n\nI don't have much idea about the MSVC script, but I will try to see\nsome other parameters and fix this.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 6 May 2021 21:33:53 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Small issues with CREATE TABLE COMPRESSION" }, { "msg_contents": "On Thu, May 06, 2021 at 09:04:57PM +0900, Michael Paquier wrote:\n> Yeah, I agree that this is an improvement, so let's fix this.\n\nJust noticed that this was not applied yet, so done while I was\nlooking at this thread again.\n--\nMichael", "msg_date": "Sat, 8 May 2021 10:34:04 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Small issues with CREATE TABLE COMPRESSION" }, { "msg_contents": "On Sat, May 8, 2021 at 7:04 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, May 06, 2021 at 09:04:57PM +0900, Michael Paquier wrote:\n> > Yeah, I agree that this is an improvement, so let's fix this.\n>\n> Just noticed that this was not applied yet, so done while I was\n> looking at this thread again.\n\nThanks!\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 8 May 2021 11:18:48 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Small issues with CREATE TABLE COMPRESSION" }, { "msg_contents": "On Thu, May 06, 2021 at 09:33:53PM +0530, Dilip Kumar wrote:\n> On Thu, May 6, 2021 at 5:42 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Thu, May 06, 2021 at 10:45:53AM +0530, Dilip Kumar wrote:\n> > > Thanks, Robert and Michael for your 
input. I will try to understand\n> > > how it is done in the example shared by you and come up with the test\n> > > once I get time. I assume this is not something urgent.\n> >\n> > Thanks. FWIW, I'd rather see this gap closed asap, as features should\n> > come with proper tests IMO.\n> \n> I have done this please find the attached patch.\n\nNo objections to take the approach to mark the lz4-related test with a\nspecial flag and skip them. I have three comments:\n- It would be good to document this new flag. See the comment block\non top of %dump_test_schema_runs.\n- There should be a test for --no-toast-compression. You can add a\nnew command in %pgdump_runs, then unmatch the expected output with the\noption.\n- I would add one test case with COMPRESSION pglz somewhere to check\nafter the case of ALTER TABLE COMPRESSION commands not generated as\nthis depends on default_toast_compression. A second test I'd add is a\nmaterialized view with a column switched to use lz4 as compression\nmethod with an extra ALTER command in create_sql.\n\n> I don't have much idea about the MSVC script, but I will try to see\n> some other parameters and fix this.\n\nThanks! I can dive into that if that's an issue. Let's make things\ncompatible with what upstream provides, meaning that we should have\nsome documentation pointing to the location of their deliverables,\nequally to what we do for the Perl and OpenSSL dependencies for\nexample.\n--\nMichael", "msg_date": "Sat, 8 May 2021 17:37:58 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Small issues with CREATE TABLE COMPRESSION" }, { "msg_contents": "On Sat, May 08, 2021 at 05:37:58PM +0900, Michael Paquier wrote:\n> Thanks! I can dive into that if that's an issue. 
Let's make things\n> compatible with what upstream provides, meaning that we should have\n> some documentation pointing to the location of their deliverables,\n> equally to what we do for the Perl and OpenSSL dependencies for\n> example.\n\nDilip has sent me a patch set without adding pgsql-hackers in CC (I\nguess these will be available soon). Anyway, this patch included a\nchange to fix a hole in the installation docs, where --with-lz4 is not\nlisted yet. I have reviewed that stuff and found more\ninconsistencies in the docs, leading me to the attached:\n- The upstream project name is \"LZ4\", so we had better use the correct\nname when not referring to the option value used in CREATE/ALTER\nTABLE.\n- doc/src/sgml/installation.sgml misses a description for --with-lz4.\n\nWithout the Windows changes, I am finishing with the attached to close\nthe loop with the docs.\n\nThanks,\n--\nMichael", "msg_date": "Sat, 8 May 2021 22:13:09 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Small issues with CREATE TABLE COMPRESSION" }, { "msg_contents": "On Sat, May 08, 2021 at 10:13:09PM +0900, Michael Paquier wrote:\n> + You need <productname>LZ4</productname>, if you want to support\n> + the compression of data with this method; see\n> + <xref linkend=\"sql-createtable\"/>.\n\nI suggest to change \"the compression\" to \"compression\".\nI would write the whole thing like:\n| The LZ4 library is needed to support compression of data using that method...\n\n> + Build with <productname>LZ4</productname> compression support.\n> + This allows the use of <productname>LZ4</productname> for the\n> + compression of table data. 
\n\nremove \"the\"\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 8 May 2021 08:22:39 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Small issues with CREATE TABLE COMPRESSION" }, { "msg_contents": "| You need LZ4, if you want to support the compression of data with this method; see CREATE TABLE. \n\nI suggest that should reference guc-default-toast-compression instead of CREATE\nTABLE, since CREATE TABLE is large and very non-specific.\n\nAlso, in at least 3 places there's extraneous trailing whitespace.\nTwo of these should (I think) be a blank line.\n\n+ <xref linkend=\"sql-createtable\"/>.$\n+ </para>$\n+ </listitem>$\n+ $\n <listitem>$\n\n+ compression of table data. $\n+ </para>$\n+ </listitem>$\n+ </varlistentry>$\n+ $\n\n\n", "msg_date": "Sat, 8 May 2021 09:06:18 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Small issues with CREATE TABLE COMPRESSION" }, { "msg_contents": "On Sat, May 8, 2021 at 6:43 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sat, May 08, 2021 at 05:37:58PM +0900, Michael Paquier wrote:\n> > Thanks! I can dive into that if that's an issue. Let's make things\n> > compatible with what upstream provides, meaning that we should have\n> > some documentation pointing to the location of their deliverables,\n> > equally to what we do for the Perl and OpenSSL dependencies for\n> > example.\n>\n> Dilip has sent me a patch set without adding pgsql-hackers in CC (I\n> guess these will be available soon).\n\nMy bad.\n\n Anyway, this patch included a\n> change to fix a hole in the installation docs, where --with-lz4 is not\n> listed yet. 
I have reviewed that stuff and found more\n> inconsistencies in the docs, leading me to the attached:\n> - The upstream project name is \"LZ4\", so we had better use the correct\n> name when not referring to the option value used in CREATE/ALTER\n> TABLE.\n> - doc/src/sgml/installation.sgml misses a description for --with-lz4.\n>\n> Without the Windows changes, I am finishing with the attached to close\n> the loop with the docs.\n\nThanks for the changes. I will send the other patches soon, after\nremoving the doc part which you have already included here.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 8 May 2021 20:03:34 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Small issues with CREATE TABLE COMPRESSION" }, { "msg_contents": "On Sat, May 8, 2021 at 2:08 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, May 06, 2021 at 09:33:53PM +0530, Dilip Kumar wrote:\n> > On Thu, May 6, 2021 at 5:42 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > >\n> > > On Thu, May 06, 2021 at 10:45:53AM +0530, Dilip Kumar wrote:\n> > > > Thanks, Robert and Michael for your input. I will try to understand\n> > > > how it is done in the example shared by you and come up with the test\n> > > > once I get time. I assume this is not something urgent.\n> > >\n> > > Thanks. FWIW, I'd rather see this gap closed asap, as features should\n> > > come with proper tests IMO.\n> >\n> > I have done this please find the attached patch.\n>\n> No objections to take the approach to mark the lz4-related test with a\n> special flag and skip them. I have three comments:\n> - It would be good to document this new flag. See the comment block\n> on top of %dump_test_schema_runs.\n> - There should be a test for --no-toast-compression. 
You can add a\n> new command in %pgdump_runs, then unmatch the expected output with the\n> option.\n> - I would add one test case with COMPRESSION pglz somewhere to check\n> after the case of ALTER TABLE COMPRESSION commands not generated as\n> this depends on default_toast_compression. A second test I'd add is a\n> materialized view with a column switched to use lz4 as compression\n> method with an extra ALTER command in create_sql.\n\nI have fixed some of them, I could not write the test cases where we\nhave to ensure that 'ALTER TABLE COMPRESSION' is not appearing in the\ndump. Basically, I don't have knowledge of the perl language so even\nafter trying for some time I could not write those 2 test cases. I\nhave fixed the remaining comments.\n\n\n> > I don't have much idea about the MSVC script, but I will try to see\n> > some other parameters and fix this.\n>\n> Thanks! I can dive into that if that's an issue. Let's make things\n> compatible with what upstream provides, meaning that we should have\n> some documentation pointing to the location of their deliverables,\n> equally to what we do for the Perl and OpenSSL dependencies for\n> example.\n\nI have changed the documentation and also updated the Solution.pm. 
I\ncould not verify the windows build yet as I am not having windows\nenvironment.\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sat, 8 May 2021 20:19:03 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Small issues with CREATE TABLE COMPRESSION" }, { "msg_contents": "On Sat, May 08, 2021 at 09:06:18AM -0500, Justin Pryzby wrote:\n> I suggest that should reference guc-default-toast-compression instead of CREATE\n> TABLE, since CREATE TABLE is large and very non-specific.\n\nYes, that's a better idea.\n\n> Also, in at least 3 places there's extraneous trailing whitespace.\n> Two of these should (I think) be a blank line.\n\nFixed these, and applied this doc patch as a first piece. Thanks for\nthe review.\n--\nMichael", "msg_date": "Mon, 10 May 2021 09:36:24 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Small issues with CREATE TABLE COMPRESSION" }, { "msg_contents": "On Sat, May 08, 2021 at 08:19:03PM +0530, Dilip Kumar wrote:\n> I have fixed some of them, I could not write the test cases where we\n> have to ensure that 'ALTER TABLE COMPRESSION' is not appearing in the\n> dump. Basically, I don't have knowledge of the perl language so even\n> after trying for some time I could not write those 2 test cases. I\n> have fixed the remaining comments.\n\nThanks. I have spent some time on that, and after adding some tests\nwith --no-toast-compression, I have applied this part.\n\nNow comes the last part of the thread: support for the build with\nMSVC. I have looked in details at the binaries provided by upstream\non its release page, but these are for msys and mingw, so MSVC won't\nwork with that.\n\nSaying that, the upstream code can be compiled across various MSVC\nversions, with 2010 being the oldest version supported, even if there\nis no compiled libraries provided on the release pages. 
The way of\ndoing things here is to compile the code by yourself after downloading\nthe source tarball, with liblz4.lib and liblz4.dll being the generated\nbits interesting for Postgres, so using\nhttps://github.com/lz4/lz4/releases as reference for the download\nlooks enough, still that requires some efforts from the users to be\nable to do that. Another trick is to use vcpkg, but the deliverables\ngenerated are named lz4.{dll,lib} which is inconsistent with the\nupstream naming liblz4.{dll,lib} (see Makefile.inc for the details).\nMy image of the whole thing is that this finishes by being a pain,\nstill that's possible, but that's similar with my experience with any\nother dependencies.\n\nI have been spending some time playing with the builds and that was\nworking nicely. Please note that you have missed an update in\nconfig_default.pl and not all the CFLAGS entries were present in\nGenerateFiles().\n\nIt may be nice to see if this stuff requires any adjustments for msys\nand mingw, but I don't have such environments at hand.\n\nAll that leads me to the updated version attached.\n\nThoughts?\n--\nMichael", "msg_date": "Mon, 10 May 2021 14:57:21 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Small issues with CREATE TABLE COMPRESSION" }, { "msg_contents": "On Mon, May 10, 2021 at 11:27 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sat, May 08, 2021 at 08:19:03PM +0530, Dilip Kumar wrote:\n> > I have fixed some of them, I could not write the test cases where we\n> > have to ensure that 'ALTER TABLE COMPRESSION' is not appearing in the\n> > dump. Basically, I don't have knowledge of the perl language so even\n> > after trying for some time I could not write those 2 test cases. I\n> > have fixed the remaining comments.\n>\n> Thanks. 
I have spent some time on that, and after adding some tests\n> with --no-toast-compression, I have applied this part.\n\nThanks!\n\n> Now comes the last part of the thread: support for the build with\n> MSVC. I have looked in details at the binaries provided by upstream\n> on its release page, but these are for msys and mingw, so MSVC won't\n> work with that.\n>\n> Saying that, the upstream code can be compiled across various MSVC\n> versions, with 2010 being the oldest version supported, even if there\n> is no compiled libraries provided on the release pages. The way of\n> doing things here is to compile the code by yourself after downloading\n> the source tarball, with liblz4.lib and liblz4.dll being the generated\n> bits interesting for Postgres, so using\n> https://github.com/lz4/lz4/releases as reference for the download\n> looks enough, still that requires some efforts from the users to be\n> able to do that. Another trick is to use vcpkg, but the deliverables\n> generated are named lz4.{dll,lib} which is inconsistent with the\n> upstream naming liblz4.{dll,lib} (see Makefile.inc for the details).\n> My image of the whole thing is that this finishes by being a pain,\n> still that's possible, but that's similar with my experience with any\n> other dependencies.\n\nEven I was confused about that's the reason I used liblz4_static.lib,\nbut I see you have changed to liblz4.lib to make it consistent I\nguess?\n\n> I have been spending some time playing with the builds and that was\n> working nicely. 
Please note that you have missed an update in\n> config_default.pl and not all the CFLAGS entries were present in\n> GenerateFiles().\n\nYeah, I have noticed, and thanks for changing that.\n\n> It may be nice to see if this stuff requires any adjustments for msys\n> and mingw, but I don't have such environments at hand.\n>\n> All that leads me to the updated version attached.\n>\n> Thoughts?\n\nPatch looks good to me, I can not verify though because I don't have\nsuch an environment. Thanks for improving the patch.\n\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 10 May 2021 12:17:19 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Small issues with CREATE TABLE COMPRESSION" }, { "msg_contents": "On Mon, May 10, 2021 at 12:17:19PM +0530, Dilip Kumar wrote:\n> Even I was confused about that's the reason I used liblz4_static.lib,\n> but I see you have changed to liblz4.lib to make it consistent I\n> guess?\n\nThat's the name the upstream code is using, yes.\n\n> Patch looks good to me, I can not verify though because I don't have\n> such an environment. Thanks for improving the patch.\n\nThanks, I got that applied to finish the work of this thread for\nbeta1. At least this will give people an option to test LZ4 on\nWindows. Perhaps this will require some adjustments, but let's see if\nthat's necessary when that comes up.\n--\nMichael", "msg_date": "Tue, 11 May 2021 10:48:54 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Small issues with CREATE TABLE COMPRESSION" }, { "msg_contents": "On Tue, May 11, 2021 at 7:19 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> > Patch looks good to me, I can not verify though because I don't have\n> > such an environment. Thanks for improving the patch.\n>\n> Thanks, I got that applied to finish the work of this thread for\n> beta1. 
At least this will give people an option to test LZ4 on\n> Windows. Perhaps this will require some adjustments, but let's see if\n> that's necessary when that comes up.\n\nThanks, make sense.\n\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 11 May 2021 09:56:47 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Small issues with CREATE TABLE COMPRESSION" } ]
[ { "msg_contents": "Hi,\n\nWhile reviewing [1], I found that the CREATE COLLATION doesn't throw an\nerror if duplicate options are specified, see [2] for testing a few cases\non HEAD. This may end up accepting some of the weird cases, see [2]. It's\nagainst other option checking code in the server where the duplicate option\nis detected and an error thrown if found one. Attached a patch doing that.\nI chose to have the error message \"option \\\"%s\\\" specified more than once\"\nand parser_errposition because that's kind of agreed in [3].\n\nThoughts?\n\n[1]\nhttps://www.postgresql.org/message-id/CALj2ACWVd%3D-E6uG5AdHD0MvHY6e4mVzkapT%3DvLDnJJseGjaJLQ%40mail.gmail.com\n\n[2]\nCREATE COLLATION coll_dup_chk (LC_COLLATE = \"POSIX\", LC_COLLATE =\n\"NONSENSE\", LC_CTYPE = \"POSIX\"); -- ERROR\nCREATE COLLATION coll_dup_chk (LC_COLLATE = \"NONSENSE\", LC_COLLATE =\n\"POSIX\", LC_CTYPE = \"POSIX\"); -- OK but it's weird\nCREATE COLLATION coll_dup_chk (LC_CTYPE = \"POSIX\", LC_CTYPE = \"NONSENSE\",\nLC_COLLATE = \"POSIX\"); -- ERROR\nCREATE COLLATION coll_dup_chk (LC_CTYPE = \"NONSENSE\", LC_CTYPE = \"POSIX\",\nLC_COLLATE = \"POSIX\",); -- OK but it's weird\nCREATE COLLATION coll_dup_chk (PROVIDER = icu, PROVIDER = NONSENSE,\nLC_COLLATE = \"POSIX\", LC_CTYPE = \"POSIX\"); -- ERROR\nCREATE COLLATION coll_dup_chk (PROVIDER = NONSENSE, PROVIDER = icu,\nLC_COLLATE = \"POSIX\", LC_CTYPE = \"POSIX\"); -- OK but it's weird\nCREATE COLLATION case_sensitive (LOCALE = '', LOCALE = \"NONSENSE\"); -- ERROR\nCREATE COLLATION coll_dup_chk (LOCALE = \"NONSENSE\", LOCALE = ''); -- OK but\nit's weird\nCREATE COLLATION coll_dup_chk (DETERMINISTIC = TRUE, DETERMINISTIC =\nNONSENSE, LOCALE = ''); -- ERROR\nCREATE COLLATION coll_dup_chk (DETERMINISTIC = NONSENSE, DETERMINISTIC =\nTRUE, LOCALE = ''); -- OK but it's weird\n\n[3]\nhttps://www.postgresql.org/message-id/CALj2ACUa%3DZM8QtOLPCHc7%3DWgFrx9P6-AgKQs8cmKLvNCvu7arQ%40mail.gmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: 
http://www.enterprisedb.com", "msg_date": "Tue, 27 Apr 2021 15:21:06 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "CREATE COLLATION - check for duplicate options and error out if found\n one" }, { "msg_contents": "On Tue, Apr 27, 2021 at 3:21 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> While reviewing [1], I found that the CREATE COLLATION doesn't throw an error if duplicate options are specified, see [2] for testing a few cases on HEAD. This may end up accepting some of the weird cases, see [2]. It's against other option checking code in the server where the duplicate option is detected and an error thrown if found one. Attached a patch doing that. I chose to have the error message \"option \\\"%s\\\" specified more than once\" and parser_errposition because that's kind of agreed in [3].\n>\n> Thoughts?\n>\n> [1] https://www.postgresql.org/message-id/CALj2ACWVd%3D-E6uG5AdHD0MvHY6e4mVzkapT%3DvLDnJJseGjaJLQ%40mail.gmail.com\n>\n> [2]\n> CREATE COLLATION coll_dup_chk (LC_COLLATE = \"POSIX\", LC_COLLATE = \"NONSENSE\", LC_CTYPE = \"POSIX\"); -- ERROR\n> CREATE COLLATION coll_dup_chk (LC_COLLATE = \"NONSENSE\", LC_COLLATE = \"POSIX\", LC_CTYPE = \"POSIX\"); -- OK but it's weird\n> CREATE COLLATION coll_dup_chk (LC_CTYPE = \"POSIX\", LC_CTYPE = \"NONSENSE\", LC_COLLATE = \"POSIX\"); -- ERROR\n> CREATE COLLATION coll_dup_chk (LC_CTYPE = \"NONSENSE\", LC_CTYPE = \"POSIX\", LC_COLLATE = \"POSIX\",); -- OK but it's weird\n> CREATE COLLATION coll_dup_chk (PROVIDER = icu, PROVIDER = NONSENSE, LC_COLLATE = \"POSIX\", LC_CTYPE = \"POSIX\"); -- ERROR\n> CREATE COLLATION coll_dup_chk (PROVIDER = NONSENSE, PROVIDER = icu, LC_COLLATE = \"POSIX\", LC_CTYPE = \"POSIX\"); -- OK but it's weird\n> CREATE COLLATION case_sensitive (LOCALE = '', LOCALE = \"NONSENSE\"); -- ERROR\n> CREATE COLLATION coll_dup_chk (LOCALE = \"NONSENSE\", LOCALE = ''); -- OK but it's weird\n> 
CREATE COLLATION coll_dup_chk (DETERMINISTIC = TRUE, DETERMINISTIC = NONSENSE, LOCALE = ''); -- ERROR\n> CREATE COLLATION coll_dup_chk (DETERMINISTIC = NONSENSE, DETERMINISTIC = TRUE, LOCALE = ''); -- OK but it's weird\n>\n> [3] https://www.postgresql.org/message-id/CALj2ACUa%3DZM8QtOLPCHc7%3DWgFrx9P6-AgKQs8cmKLvNCvu7arQ%40mail.gmail.com\n\n+1 for fixing this issue, we have handled this error in other places.\nThe patch does not apply on head, could you rebase the patch on head\nand post a new patch.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 26 May 2021 19:17:56 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CREATE COLLATION - check for duplicate options and error out if\n found one" }, { "msg_contents": "On Wed, May 26, 2021 at 7:18 PM vignesh C <vignesh21@gmail.com> wrote:\n> +1 for fixing this issue, we have handled this error in other places.\n> The patch does not apply on head, could you rebase the patch on head\n> and post a new patch.\n\nThanks. I thought of rebasing once the other patch (which reorganizes\n\"...specified more than once\" error) gets committed. Anyways, I've\nrebased for now on the latest master. Please review v2 patch.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 26 May 2021 19:44:47 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: CREATE COLLATION - check for duplicate options and error out if\n found one" }, { "msg_contents": "On Wed, May 26, 2021 at 7:44 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, May 26, 2021 at 7:18 PM vignesh C <vignesh21@gmail.com> wrote:\n> > +1 for fixing this issue, we have handled this error in other places.\n> > The patch does not apply on head, could you rebase the patch on head\n> > and post a new patch.\n>\n> Thanks. 
I thought of rebasing once the other patch (which reorganizes\n> \"...specified more than once\" error) gets committed. Anyways, I've\n> rebased for now on the latest master. Please review v2 patch.\n\nThe test changes look good to me, I liked the idea of rebasing the\nsource changes once \"specified more than once\" patch gets committed. I\nwill review the code changes after that.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 27 May 2021 20:36:30 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CREATE COLLATION - check for duplicate options and error out if\n found one" }, { "msg_contents": "On Thu, May 27, 2021 at 8:36 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Wed, May 26, 2021 at 7:44 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Wed, May 26, 2021 at 7:18 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > +1 for fixing this issue, we have handled this error in other places.\n> > > The patch does not apply on head, could you rebase the patch on head\n> > > and post a new patch.\n> >\n> > Thanks. I thought of rebasing once the other patch (which reorganizes\n> > \"...specified more than once\" error) gets committed. Anyways, I've\n> > rebased for now on the latest master. Please review v2 patch.\n>\n> The test changes look good to me, I liked the idea of rebasing the\n> source changes once \"specified more than once\" patch gets committed. I\n> will review the code changes after that.\n\nI'm not sure which patch goes first. 
I think the review can be\nfinished and see which one will be picked up first by the committer.\nBased on that, the other patch can be rebased.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 27 May 2021 22:33:39 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: CREATE COLLATION - check for duplicate options and error out if\n found one" }, { "msg_contents": "On Wed, May 26, 2021 at 7:44 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, May 26, 2021 at 7:18 PM vignesh C <vignesh21@gmail.com> wrote:\n> > +1 for fixing this issue, we have handled this error in other places.\n> > The patch does not apply on head, could you rebase the patch on head\n> > and post a new patch.\n>\n> Thanks. I thought of rebasing once the other patch (which reorganizes\n> \"...specified more than once\" error) gets committed. Anyways, I've\n> rebased for now on the latest master. 
Please review v2 patch.\n>\n\nThanks for the updated patch.\nOne minor comment:\nYou can remove the brackets around errcode, You could change:\n+ if (localeEl)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_SYNTAX_ERROR),\n+ errmsg(\"option \\\"%s\\\" specified more than once\", defel->defname),\n+ parser_errposition(pstate, defel->location)));\nto:\n+ if (localeEl)\n+ ereport(ERROR,\n+ errcode(ERRCODE_SYNTAX_ERROR),\n+ errmsg(\"option \\\"%s\\\" specified more than once\", defel->defname),\n+ parser_errposition(pstate, defel->location));\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sat, 29 May 2021 21:08:30 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CREATE COLLATION - check for duplicate options and error out if\n found one" }, { "msg_contents": "On Sat, May 29, 2021 at 9:08 PM vignesh C <vignesh21@gmail.com> wrote:\n> One minor comment:\n> You can remove the brackets around errcode, You could change:\n> + if (localeEl)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_SYNTAX_ERROR),\n> + errmsg(\"option \\\"%s\\\" specified more than once\", defel->defname),\n> + parser_errposition(pstate, defel->location)));\n> to:\n> + if (localeEl)\n> + ereport(ERROR,\n> + errcode(ERRCODE_SYNTAX_ERROR),\n> + errmsg(\"option \\\"%s\\\" specified more than once\", defel->defname),\n> + parser_errposition(pstate, defel->location));\n\nThanks. 
PSA v3 patch.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sat, 29 May 2021 21:19:59 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: CREATE COLLATION - check for duplicate options and error out if\n found one" }, { "msg_contents": "On Sat, May 29, 2021 at 9:20 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Sat, May 29, 2021 at 9:08 PM vignesh C <vignesh21@gmail.com> wrote:\n> > One minor comment:\n> > You can remove the brackets around errcode, You could change:\n> > + if (localeEl)\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_SYNTAX_ERROR),\n> > + errmsg(\"option \\\"%s\\\" specified more than once\", defel->defname),\n> > + parser_errposition(pstate, defel->location)));\n> > to:\n> > + if (localeEl)\n> > + ereport(ERROR,\n> > + errcode(ERRCODE_SYNTAX_ERROR),\n> > + errmsg(\"option \\\"%s\\\" specified more than once\", defel->defname),\n> > + parser_errposition(pstate, defel->location));\n>\n> Thanks. PSA v3 patch.\n\nThanks for the updated patch, the changes look good to me.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 31 May 2021 19:40:19 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CREATE COLLATION - check for duplicate options and error out if\n found one" }, { "msg_contents": "On Mon, 31 May 2021 at 15:10, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Sat, May 29, 2021 at 9:20 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > Thanks. 
PSA v3 patch.\n>\n> Thanks for the updated patch, the changes look good to me.\n>\n\nHi,\n\nHaving pushed [1], I started looking at this, and I think it's mostly\nin good shape.\n\nSince we're improving the CREATE COLLATION errors, I think it's also\nworth splitting out the error for LOCALE + LC_COLLATE/LC_CTYPE from\nthe error for FROM + any other option.\n\nIn the case of LOCALE + LC_COLLATE/LC_CTYPE, there is an identical\nerror in CREATE DATABASE, so we should use the same error message and\ndetail text here.\n\nThen logically, FROM + any other option should have an error of the\nsame general form.\n\nAnd finally, it then makes sense to make the other errors follow the\nsame pattern (with the \"specified more than once\" text in the detail),\nwhich is also where we ended up in the discussion over in [1].\n\nSo, attached is what I propose.\n\nRegards,\nDean\n\n[1] https://www.postgresql.org/message-id/CAEZATCXHWa9OoSAetiZiGQy1eM2raa9q-b3K4ZYDwtcARypCcA%40mail.gmail.com", "msg_date": "Thu, 15 Jul 2021 20:34:38 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CREATE COLLATION - check for duplicate options and error out if\n found one" }, { "msg_contents": "On Fri, Jul 16, 2021 at 1:04 AM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> Having pushed [1], I started looking at this, and I think it's mostly\n> in good shape.\n\nThanks a lot for taking a look at this.\n\n> Since we're improving the CREATE COLLATION errors, I think it's also\n> worth splitting out the error for LOCALE + LC_COLLATE/LC_CTYPE from\n> the error for FROM + any other option.\n>\n> In the case of LOCALE + LC_COLLATE/LC_CTYPE, there is an identical\n> error in CREATE DATABASE, so we should use the same error message and\n> detail text here.\n>\n> Then logically, FROM + any other option should have an error of the\n> same general form.\n>\n> And finally, it then makes sense to make the other errors follow the\n> same pattern (with the 
\"specified more than once\" text in the detail),\n> which is also where we ended up in the discussion over in [1].\n>\n> So, attached is what I propose.\n\nHere are some comments:\n\n1) Duplicate option check for \"FROM\" option is unnecessary and will be\na dead code. Because the syntaxer anyways would catch if FROM is\nspecified more than once, something like CREATE COLLATION mycoll1 FROM\nFROM \"C\";.\n+ {\n+ if (fromEl)\n+ errorDuplicateDefElem(defel, pstate);\n defelp = &fromEl;\n\nAnd we might think to catch below errors:\n\npostgres=# CREATE COLLATION coll_dup_chk (FROM = \"C\", FROM = \"C\",\nVERSION = \"1\");\nERROR: conflicting or redundant options\nLINE 1: CREATE COLLATION coll_dup_chk (FROM = \"C\", FROM = \"C\", VERSI...\n ^\nDETAIL: Option \"from\" specified more than once.\n\nBut IMO, the above should fail with:\n\npostgres=# CREATE COLLATION coll_dup_chk (FROM = \"C\", FROM = \"C\",\nVERSION = \"1\");\nERROR: conflicting or redundant options\nDETAIL: FROM cannot be specified together with any other options.\n\n2) I don't understand the difference between errorConflictingDefElem\nand errorDuplicateDefElem. Isn't the following statement \"This should\nonly be used if defel->defname is guaranteed to match the user-entered\noption name\"\ntrue with errorConflictingDefElem? I mean, aren't we calling\nerrorConflictingDefElem only if the defel->defname is guaranteed to\nmatch the user-entered option name? I don't see much use of\nerrdetail(\"Option \\\"%s\\\" specified more than once.\", defel->defname),\nin errorDuplicateDefElem when we have pstate and that sort of points\nto the option that's specified more than once.\n+\n+/*\n+ * Raise an error about a duplicate DefElem.\n+ *\n+ * This is similar to errorConflictingDefElem(), except that it is intended for\n+ * an option that the user explicitly specified more than once. 
This should\n+ * only be used if defel->defname is guaranteed to match the user-entered\n+ * option name, otherwise the detail text might be confusing.\n+ */\n\nI personally don't like the new function errorDuplicateDefElem as it\ndoesn't add any value given the presence of pstate.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 16 Jul 2021 11:10:14 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: CREATE COLLATION - check for duplicate options and error out if\n found one" }, { "msg_contents": "On Fri, 16 Jul 2021 at 06:40, Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> 1) Duplicate option check for \"FROM\" option is unnecessary and will be\n> a dead code. Because the syntaxer anyways would catch if FROM is\n> specified more than once, something like CREATE COLLATION mycoll1 FROM\n> FROM \"C\";.\n\nHmm, it is possible to type CREATE COLLATION mycoll1 (FROM = \"C\", FROM\n= \"POSIX\") though. It will still be caught by the check at the bottom\nthough, so I guess it's OK to skip the duplicate check in that case.\nAlso, it's not a documented syntax, so it's unlikely to occur in\npractice anyway.\n\n> 2) I don't understand the difference between errorConflictingDefElem\n> and errorDuplicateDefElem.\n>\n> I personally don't like the new function errorDuplicateDefElem as it\n> doesn't add any value given the presence of pstate.\n\nYeah, there was a lot of discussion on that other thread about using\ndefel->defname in these kinds of errors, and that was still on my\nmind. In general there are a number of cases where defel->defname\nisn't quite right, because it doesn't match the user-entered text,\nthough in this case it always does. You're right though, it's a bit\nredundant with the position indicator pointing to the offending\noption, so it's probably not worth the effort to include it here when\nwe don't anywhere else. 
That makes the patch much simpler, and\nconsistent with option-checking elsewhere -- v5 attached (which is now\nmuch more like your v3).\n\nRegards,\nDean", "msg_date": "Fri, 16 Jul 2021 09:01:55 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CREATE COLLATION - check for duplicate options and error out if\n found one" }, { "msg_contents": "On Fri, Jul 16, 2021 at 1:32 PM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Fri, 16 Jul 2021 at 06:40, Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > 1) Duplicate option check for \"FROM\" option is unnecessary and will be\n> > a dead code. Because the syntaxer anyways would catch if FROM is\n> > specified more than once, something like CREATE COLLATION mycoll1 FROM\n> > FROM \"C\";.\n>\n> Hmm, it is possible to type CREATE COLLATION mycoll1 (FROM = \"C\", FROM\n> = \"POSIX\") though. It will still be caught by the check at the bottom\n> though, so I guess it's OK to skip the duplicate check in that case.\n> Also, it's not a documented syntax, so it's unlikely to occur in\n> practice anyway.\n>\n> > 2) I don't understand the difference between errorConflictingDefElem\n> > and errorDuplicateDefElem.\n> >\n> > I personally don't like the new function errorDuplicateDefElem as it\n> > doesn't add any value given the presence of pstate.\n>\n> Yeah, there was a lot of discussion on that other thread about using\n> defel->defname in these kinds of errors, and that was still on my\n> mind. In general there are a number of cases where defel->defname\n> isn't quite right, because it doesn't match the user-entered text,\n> though in this case it always does. You're right though, it's a bit\n> redundant with the position indicator pointing to the offending\n> option, so it's probably not worth the effort to include it here when\n> we don't anywhere else. 
That makes the patch much simpler, and\n> consistent with option-checking elsewhere -- v5 attached (which is now\n> much more like your v3).\n\nThanks. The v5 patch LGTM.\n\nComment on errorConflictingDefElem:\nI think that the message in errorConflictingDefElem should say\n<<option \\\"%s\\'' specified more than once>>. I'm not sure why it\nwasn't done. Almost, all the cases where errorConflictingDefElem is\ncalled from, def->defname would give the correct user specified option\nname right, as errorConflictingDefElem called in if\n(strcmp(def->defname, \"foo\") == 0) clause.\n\nIs there any location the function errorConflictingDefElem gets called\nwhen def->defname isn't a name that's specified by the user? Please\npoint me to that location. If there's a scenario, then the function\ncan be something like below:\nvoid\nerrorConflictingDefElem(DefElem *defel, ParseState *pstate, bool\nshow_option_name)\n{\n if (show_option_name)\n ereport(ERROR,\n errcode(ERRCODE_SYNTAX_ERROR),\n errmsg(\"option \\\"%s\\\" specified more than once\",\ndefel->defname),\n pstate ? parser_errposition(pstate, defel->location) : 0);\n else\n ereport(ERROR,\n errcode(ERRCODE_SYNTAX_ERROR),\n errmsg(\"conflicting or redundant options\"),\n pstate ? parser_errposition(pstate, defel->location) : 0);\n}\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 16 Jul 2021 14:56:43 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: CREATE COLLATION - check for duplicate options and error out if\n found one" }, { "msg_contents": "On Fri, 16 Jul 2021 at 10:26, Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Thanks. The v5 patch LGTM.\n\nOK, I'll push that in a while.\n\n> Comment on errorConflictingDefElem:\n> I think that the message in errorConflictingDefElem should say\n> <<option \\\"%s\\'' specified more than once>>. I'm not sure why it\n> wasn't done. 
Almost, all the cases where errorConflictingDefElem is\n> called from, def->defname would give the correct user specified option\n> name right, as errorConflictingDefElem called in if\n> (strcmp(def->defname, \"foo\") == 0) clause.\n>\n> Is there any location the function errorConflictingDefElem gets called\n> when def->defname isn't a name that's specified by the user?\n\nThere are a few cases where def->defname isn't necessarily the name\nthat was specified by the user (e.g., \"volatility\", \"strict\",\n\"format\", and probably more cases not spotted in the other thread,\nwhich was only a cursory review), and it would be quite onerous to go\nthrough every one of the 100+ places in the code where this error is\nraised to check them all. 2bfb50b3df was more about making pstate\navailable in more places to add location information to existing\nerrors, and did not want the risk of changing and possibly worsening\nexisting errors.\n\n> If there's a scenario, then the function\n> can be something like below:\n> void\n> errorConflictingDefElem(DefElem *defel, ParseState *pstate, bool\n> show_option_name)\n> {\n> if (show_option_name)\n> ereport(ERROR,\n> errcode(ERRCODE_SYNTAX_ERROR),\n> errmsg(\"option \\\"%s\\\" specified more than once\",\n> defel->defname),\n> pstate ? parser_errposition(pstate, defel->location) : 0);\n> else\n> ereport(ERROR,\n> errcode(ERRCODE_SYNTAX_ERROR),\n> errmsg(\"conflicting or redundant options\"),\n> pstate ? parser_errposition(pstate, defel->location) : 0);\n> }\n\nI think it's preferable to have a single consistent primary error\nmessage for all these cases. I wouldn't really want \"CREATE FUNCTION\n... STRICT STRICT\" to throw a different error from \"CREATE FUNCTION\n... LEAKPROOF LEAKPROOF\", but saying \"option \\\"strict\\\" specified more\nthan once\" would be odd for \"CREATE FUNCTION ... 
CALLED ON NULL INPUT\nRETURNS NULL ON NULL INPUT\", which is indistinguishable from \"STRICT\nSTRICT\" in the code.\n\nIn any case, as you said before, the location is sufficient to\nidentify the offending option. Making this kind of change would\nrequire going through all 100+ callers quite carefully, and a lot more\ntesting. I'm not really convinced that it's worth it.\n\nRegards,\nDean\n\n\n", "msg_date": "Fri, 16 Jul 2021 12:17:23 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CREATE COLLATION - check for duplicate options and error out if\n found one" }, { "msg_contents": "On Fri, Jul 16, 2021 at 4:47 PM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Fri, 16 Jul 2021 at 10:26, Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > Thanks. The v5 patch LGTM.\n>\n> OK, I'll push that in a while.\n\nThanks.\n\n> There are a few cases where def->defname isn't necessarily the name\n> that was specified by the user (e.g., \"volatility\", \"strict\",\n> \"format\", and probably more cases not spotted in the other thread,\n> which was only a cursory review), and it would be quite onerous to go\n> through every one of the 100+ places in the code where this error is\n> raised to check them all. 2bfb50b3df was more about making pstate\n> available in more places to add location information to existing\n> errors, and did not want the risk of changing and possibly worsening\n> existing errors.\n>\n> I think it's preferable to have a single consistent primary error\n> message for all these cases. I wouldn't really want \"CREATE FUNCTION\n> ... STRICT STRICT\" to throw a different error from \"CREATE FUNCTION\n> ... LEAKPROOF LEAKPROOF\", but saying \"option \\\"strict\\\" specified more\n> than once\" would be odd for \"CREATE FUNCTION ... 
CALLED ON NULL INPUT\n> RETURNS NULL ON NULL INPUT\", which is indistinguishable from \"STRICT\n> STRICT\" in the code.\n>\n> In any case, as you said before, the location is sufficient to\n> identify the offending option. Making this kind of change would\n> require going through all 100+ callers quite carefully, and a lot more\n> testing. I'm not really convinced that it's worth it.\n\nThanks for the examples. I get it. It doesn't make sense to show\n\"option \"foo\" specified more than once\" for the cases where \"foo\" is\nnot necessarily an option that's specified in a WITH clause of a\nstatement, but it can be something like CREATE FUNCTION ... foo foo,\nlike you quoted above.\n\nI also think that if it is specified as CREATE FUNCTION ... STRICT\nSTRICT, CREATE FUNCTION ... CALLED ON NULL INPUT RETURNS NULL ON NULL\nINPUT etc. isn't the syntaxer catching that error while parsing the\nSQL text, similar to CREATE COLLATION mycoll1 FROM FROM \"C\";? If we\ndon't want to go dig why the syntaxer isn't catching such errors, I\ntend to agree to keep the errorConflictingDefElem as is given the\neffort that one needs to put in identifying, fixing and testing the\nchanges.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Sat, 17 Jul 2021 09:54:07 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: CREATE COLLATION - check for duplicate options and error out if\n found one" }, { "msg_contents": "On Fri, 16 Jul 2021 at 12:17, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Fri, 16 Jul 2021 at 10:26, Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > Thanks. 
The v5 patch LGTM.\n>\n> OK, I'll push that in a while.\n>\n\nPushed, with some additional tidying up.\n\nIn particular, I decided it was neater to follow the style of the code\nin typecmds.c, and just do a single check for duplicates at the end of\nthe loop, since that makes for a significantly smaller patch, with\nless code duplication. That, of course, means duplicate \"from\" options\nare handled the same as any other option, but that's arguably more\nconsistent, and not particularly important anyway, since it's not a\ndocumented syntax.\n\nRegards,\nDean\n\n\n", "msg_date": "Sun, 18 Jul 2021 11:17:55 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CREATE COLLATION - check for duplicate options and error out if\n found one" }, { "msg_contents": "On Sat, 17 Jul 2021 at 05:24, Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> I also think that if it is specified as CREATE FUNCTION ... STRICT\n> STRICT, CREATE FUNCTION ... CALLED ON NULL INPUT RETURNS NULL ON NULL\n> INPUT etc. isn't the syntaxer catching that error while parsing the\n> SQL text, similar to CREATE COLLATION mycoll1 FROM FROM \"C\";?\n\nNo, they're processed quite differently. The initial parsing of CREATE\nFUNCTION allows an arbitrary list of things like STRICT, CALLED ON\nNULL INPUT, etc., which it turns into a list of DefElem that is only\nchecked later on. That's the most natural way to do it, since this is\nreally just a list of options that can appear in any order, much like\nthe version of CREATE COLLATION that allows options in parentheses,\nwhich is quite different from the version that takes a single FROM.\nReading the relevant portions of gram.y is probably the easiest way to\nunderstand it.\n\nIt's actually quite instructive to search for \"makeDefElem\" in gram.y,\nand see all the places that create a DefElem that doesn't match the\nuser-entered syntax. 
There are quite a few of them, and there may be\nothers elsewhere.\n\nRegards,\nDean\n\n\n", "msg_date": "Sun, 18 Jul 2021 11:20:03 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CREATE COLLATION - check for duplicate options and error out if\n found one" } ]
[ { "msg_contents": "Hi,\n\nIf planned parallel workers do not get launched, the Result Cache plan\nnode shows all-0 stats for each of those workers:\n\ntpch=# set max_parallel_workers TO 0;\nSET\ntpch=# explain analyze\nselect avg(l_discount) from orders, lineitem\nwhere\n l_orderkey = o_orderkey\n and o_orderdate < date '1995-03-09'\n and l_shipdate > date '1995-03-09';\n\n\nQUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------\n Finalize Aggregate (cost=315012.87..315012.88 rows=1 width=32)\n(actual time=27533.482..27533.598 rows=1 loops=1)\n -> Gather (cost=315012.44..315012.85 rows=4 width=32) (actual\ntime=27533.471..27533.587 rows=1 loops=1)\n Workers Planned: 4\n Workers Launched: 0\n -> Partial Aggregate (cost=314012.44..314012.45 rows=1\nwidth=32) (actual time=27533.177..27533.178 rows=1 loops=1)\n -> Nested Loop (cost=0.44..309046.68 rows=1986303\nwidth=4) (actual time=0.400..27390.835 rows=748912 loops=1)\n -> Parallel Seq Scan on lineitem\n(cost=0.00..154513.66 rows=4120499 width=12) (actual\ntime=0.044..7910.399 rows=16243662 loops=1)\n Filter: (l_shipdate > '1995-03-09'::date)\n Rows Removed by Filter: 13756133\n -> Result Cache (cost=0.44..0.53 rows=1\nwidth=4) (actual time=0.001..0.001 rows=0 loops=16243662)\n Cache Key: lineitem.l_orderkey\n Hits: 12085749 Misses: 4157913 Evictions:\n3256424 Overflows: 0 Memory Usage: 65537kB\n Worker 0: Hits: 0 Misses: 0 Evictions: 0\n Overflows: 0 Memory Usage: 0kB\n Worker 1: Hits: 0 Misses: 0 Evictions: 0\n Overflows: 0 Memory Usage: 0kB\n Worker 2: Hits: 0 Misses: 0 Evictions: 0\n Overflows: 0 Memory Usage: 0kB\n Worker 3: Hits: 0 Misses: 0 Evictions: 0\n Overflows: 0 Memory Usage: 0kB\n -> Index Scan using orders_pkey on orders\n(cost=0.43..0.52 rows=1 width=4) (actual time=0.002..0.002 rows=0\nloops=4157913)\n Index Cond: (o_orderkey = lineitem.l_orderkey)\n Filter: (o_orderdate < 
'1995-03-09'::date)\n Rows Removed by Filter: 1\n Planning Time: 0.211 ms\n Execution Time: 27553.477 ms\n(22 rows)\n\nBy looking at the other cases like show_sort_info() or printing\nper-worker jit info, I could see that the general policy is that we\nskip printing info for workers that are not launched. Attached is a\npatch to do the same for Result Cache.\n\nI was earlier thinking about using (instrument[n].nloops == 0) to\ncheck for not-launched workers. But we are already using \"if\n(rcstate->stats.cache_misses == 0)\" for the leader process, so for\nconsistency I used the same method for workers.\n\n-- \nThanks,\n-Amit Khandekar\nHuawei Technologies", "msg_date": "Tue, 27 Apr 2021 18:08:43 +0530", "msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Result Cache node shows per-worker info even for workers not launched" }, { "msg_contents": "On Wed, 28 Apr 2021 at 00:39, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> If planned parallel workers do not get launched, the Result Cache plan\n> node shows all-0 stats for each of those workers:\n\nThanks for reporting this and for the patch.\n\nYou're right that there is a problem here. I did in fact have code to\nskip workers that didn't have any cache misses right up until v18 of\nthe patch [1], but I removed it because I was getting some test\ninstability in the partition_prune regression tests. That was\nhighlighted by the CFbot machines. I mentioned about that in the final\nparagraph of [2]. I didn't mention the exact test there, but I was\ntalking about the test in partition_prune.sql.\n\nBy the time it came to b6002a796, I did end up changing the\npartition_prune tests. However, I had to back that version out again\nbecause of some problems with force_parallel_mode = regress buildfarm\nanimals. By the time I re-committed Result Cache in 9eacee2e6, I had\nchanged the partition_prune tests so they did SET enable_resultcache =\n0 before running that parallel test. 
I'd basically decided that the\ntest was never going to be stable on the buildfarm.\n\nThe problem there was that the results would vary depending on if the\nparallel worker managed to do anything before the main process did all\nthe work. Because the tests are pretty small scale, many machines\nwouldn't manage to get their worker(s) in gear and running before the\nmain process had finished the test. This was the reason I removed the\ncache_misses == 0 test in explain.c. I'd thought that I could make\nthat test stable by just always outputting the cache stats line from\nworkers, even if they didn't assist during execution.\n\nSo, given that I removed the parallel test in partition_prune.sql, and\ndon't have any EXPLAIN ANALYZE output for parallel tests in\nresultcache.sql, it should be safe enough to put that cache_misses ==\n0 test back into explain.c\n\nI've attached a patch to do this. The explain.c part is pretty similar\nto your patch, I just took my original code and comment.\n\nThe patch also removes the SET force_parallel_mode = off in\nresultcache.sql. That's no longer needed due to adding this check in\nexplain.c again. I also removed the changes I made to the\nexplain_parallel_append function in partition_prune.sql. 
I shouldn't\nhave included those in 9eacee2e6.\n\nI plan to push this in the next 24 hours or so.\n\nDavid\n\n[1] https://postgr.es/m/CAApHDvoOmTtNPoF-+Q1dAOMa8vWFsFbyQb_A0iUKTS5nf2DuLw@mail.gmail.com\n[2] https://postgr.es/m/CAApHDvrz4f+i1wu-8hyqJ=pxYDroGA5Okgo5rWPOj47RZ6QTmQ@mail.gmail.com", "msg_date": "Wed, 28 Apr 2021 20:24:17 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Result Cache node shows per-worker info even for workers not\n launched" }, { "msg_contents": "On Wed, Apr 28, 2021 at 1:54 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> I plan to push this in the next 24 hours or so.\n\nI happen to see explain_resultcache in resultcache.sql, seems like two\nof the tests still have numbers for cache hits and misses - Hits: 980\nMisses: 20, won't these make tests unstable? Will these numbers be\nsame across machines? Or is it that no buildfarm had caught these? The\ncomment below says that, the hits and misses are not same across\nmachines:\n-- Ensure we get some evictions. We're unable to validate the hits and misses\n-- here as the number of entries that fit in the cache at once will vary\n-- between different machines.\n\nShould we remove the hide_hitmiss parameter in explain_resultcache and\nalways print N for non-zero and Zero for 0 hits and misses? 
This\nclearly shows that we have 0 or non-zero hits or misses.\n\nAm I missing something?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 28 Apr 2021 15:08:36 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Result Cache node shows per-worker info even for workers not\n launched" }, { "msg_contents": "On Wed, 28 Apr 2021 at 15:08, Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Apr 28, 2021 at 1:54 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > I plan to push this in the next 24 hours or so.\n>\n> I happen to see explain_resultcache in resultcache.sql, seems like two\n> of the tests still have numbers for cache hits and misses - Hits: 980\n> Misses: 20, won't these make tests unstable? Will these numbers be\n> same across machines? Or is it that no buildfarm had caught these? The\n> comment below says that, the hits and misses are not same across\n> machines:\n> -- Ensure we get some evictions. We're unable to validate the hits and misses\n> -- here as the number of entries that fit in the cache at once will vary\n> -- between different machines.\n>\n> Should we remove the hide_hitmiss parameter in explain_resultcache and\n> always print N for non-zero and Zero for 0 hits and misses? 
This\n> clearly shows that we have 0 or non-zero hits or misses.\n>\n> Am I missing something?\n\nI believe, the assumption here is that with no workers involved, it is\nguaranteed to have the exact same cache misses and hits anywhere.\n\n\n", "msg_date": "Wed, 28 Apr 2021 16:11:58 +0530", "msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Result Cache node shows per-worker info even for workers not\n launched" }, { "msg_contents": "On Wed, 28 Apr 2021 at 21:38, Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Apr 28, 2021 at 1:54 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > I plan to push this in the next 24 hours or so.\n>\n> I happen to see explain_resultcache in resultcache.sql, seems like two\n> of the tests still have numbers for cache hits and misses - Hits: 980\n> Misses: 20, won't these make tests unstable? Will these numbers be\n> same across machines? Or is it that no buildfarm had caught these? The\n> comment below says that, the hits and misses are not same across\n> machines:\n> -- Ensure we get some evictions. We're unable to validate the hits and misses\n> -- here as the number of entries that fit in the cache at once will vary\n> -- between different machines.\n\nThe only reason it would be unstable is if there are cache evictions.\nEvictions will only happen if the cache fills up and we need to make\nway for new entries. A 32-bit machine, for example, will use slightly\nless memory for caching items, so the number of evictions is going to\nbe a bit less on those machine. Having an unstable number of\nevictions will cause the hits and misses to be unstable too.\nOtherwise, the number of misses is predictable, it'll be the number of\ndistinct sets of parameters that we lookup in the cache. Any repeats\nwill be a hit. 
So hits plus misses should just add up to the number\nof times that a normal parameterized nested loop would execute the\ninner side, and that's predictable too. It would only change if you\nchange the query or the data in the table.\n\n> Should we remove the hide_hitmiss parameter in explain_resultcache and\n> always print N for non-zero and Zero for 0 hits and misses? This\n> clearly shows that we have 0 or non-zero hits or misses.\n\nI added that because if there are no evictions then the hits and\nmisses should be perfectly stable, providing the test is small enough\nnot to exceed work_mem and fill the cache. If someone was to run the\ntests with a small work_mem, then there would be no shortage of other\ntests that would fail due to plan changes. These tests were designed\nto be small enough so there's no danger of getting close to work_mem\nand filling the cache.\n\nHowever, I did add 1 test that sets work_mem down to 64kB to ensure\nthe eviction code does get some exercise. You'll notice that I pass\n\"true\" to explain_resultcache() to hide the hits and misses there. We\ncan't test the exact number of hits/misses/evictions there, but we can\nat least tell apart the zero and non-zero by how I coded\nexplain_resultcache() to replace with Zero or N depending on if the\nnumber was zero or above zero.\n\nDavid\n\n\n", "msg_date": "Wed, 28 Apr 2021 22:43:44 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Result Cache node shows per-worker info even for workers not\n launched" }, { "msg_contents": "On Wed, 28 Apr 2021 at 16:14, David Rowley <dgrowleyml@gmail.com> wrote:\n> However, I did add 1 test that sets work_mem down to 64kB to ensure\n> the eviction code does get some exercise. You'll notice that I pass\n> \"true\" to explain_resultcache() to hide the hits and misses there. 
We\n> can't test the exact number of hits/misses/evictions there, but we can\n> at least tell apart the zero and non-zero by how I coded\n> explain_resultcache() to replace with Zero or N depending on if the\n> number was zero or above zero.\n\nThanks for the explanation. I did realize after replying to Bharat\nupthread, that I was wrong in assuming that the cache misses and cache\nhits are always stable for non-parallel scans.\n\n\n", "msg_date": "Wed, 28 Apr 2021 16:29:09 +0530", "msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Result Cache node shows per-worker info even for workers not\n launched" }, { "msg_contents": "On Wed, 28 Apr 2021 at 13:54, David Rowley <dgrowleyml@gmail.com> wrote:\n\n> So, given that I removed the parallel test in partition_prune.sql, and\n> don't have any EXPLAIN ANALYZE output for parallel tests in\n> resultcache.sql, it should be safe enough to put that cache_misses ==\n> 0 test back into explain.c\n>\n> I've attached a patch to do this. The explain.c part is pretty similar\n> to your patch, I just took my original code and comment.\n\nSounds good. And thanks for the cleanup patch, and the brief history.\nPatch looks ok to me.\n\n\n", "msg_date": "Wed, 28 Apr 2021 16:35:16 +0530", "msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Result Cache node shows per-worker info even for workers not\n launched" }, { "msg_contents": "On Wed, 28 Apr 2021 at 23:05, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n>\n> On Wed, 28 Apr 2021 at 13:54, David Rowley <dgrowleyml@gmail.com> wrote:\n> > I've attached a patch to do this. The explain.c part is pretty similar\n> > to your patch, I just took my original code and comment.\n>\n> Sounds good. And thanks for the cleanup patch, and the brief history.\n> Patch looks ok to me.\n\nThanks for the review. 
I pushed the patch with a small additional\nchange to further tidy up show_resultcache_info().\n\nDavid\n\n\n", "msg_date": "Fri, 30 Apr 2021 14:50:33 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Result Cache node shows per-worker info even for workers not\n launched" } ]
[ { "msg_contents": "Hi.\n\nThis is a proposal for a new feature in the pg_stat_statements extension.\nI think we need to add some statistics to the pg_stat_statements_info view.\n\n\"pg_stat_statements_info.stats_reset\" is only updated if \n\"pg_stat_statements_reset()\" or \"pg_stat_statements_reset(0,0,0)\" is executed.\nHow about changing it to the following?\n\n[before]\n-[ RECORD 1 ]------------------------------\ndealloc | 0\nstats_reset | 2021-04-27 21:30:00\n\n[after]\n-[ RECORD 1 ]------------------------------------------\ndealloc | 0\nlast_reset_all_time | 2021-04-27 21:30:00\nlast_reset_userid | 10\nlast_reset_userid_time | 2021-04-27 22:30:00\nlast_reset_dbid | 13974\nlast_reset_dbid_time | 2021-04-27 23:30:00\nlast_reset_queryid | 8436481539005031698\nlast_reset_queryid_time | 2021-04-27 23:30:00\n\nIf \"pg_stat_statements_reset(10,0,0)\" is executed, then \"last_reset_userid\" and \n\"last_reset_userid_time\" are updated, but \"last_reset_all_time\" is not \nupdated.\nIf \"pg_stat_statements_reset(0,0,0)\" is executed, then \"last_reset_userid\", \n\"last_reset_userid_time\" and the other per-target fields are null.\n\nWhat do you think?\n\nRegards,\nSeino Yuki\n\n\n", "msg_date": "Tue, 27 Apr 2021 22:13:19 +0900", "msg_from": "Seino Yuki <seinoyu@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Add reset information to pg_stat_statements_info" } ]
[ { "msg_contents": "Hi,\n\nThe TRUNCATE command currently skips processing repeated relations\n(see if (list_member_oid(relids, myrelid)) continue; in\nExecuteTruncate) because the same relation can't be truncated more\nthan once, as it will be in use during the transaction. For instance, in\nthe use cases 1) TRUNCATE foo, foo; and 2) TRUNCATE foo, ONLY foo,\nfoo; only the first instance of relation foo is taken into consideration\nfor processing, and the other relation instances (and any options\nspecified for them) are ignored.\n\nI feel that users should be aware of this behaviour so that they can\ncorrect commands written this way and don't report the behaviour as\nunexpected, especially for use cases like (2) where they might expect\nthe ONLY foo semantics but the server skips that instance.\nAFAICS, this isn't mentioned anywhere in the docs; should we document\nit as a note?\n\nThoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 27 Apr 2021 19:37:27 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Should we document the behaviour of TRUNCATE skipping repeated\n relations?" } ]
[ { "msg_contents": "The following documentation comment has been logged on the website:\n\nPage: https://www.postgresql.org/docs/13/sql-createtrigger.html\nDescription:\n\nhttps://www.postgresql.org/docs/current/sql-createtrigger.html mentions the\nword \"transaction\" only once, in reference specifically to constraint\ntriggers: \"They can be fired either at the end of the statement causing the\ntriggering event, or at the end of the containing transaction; in the latter\ncase they are said to be deferred.\"\r\n\r\nIf I understand correctly, it would be helpful to add this sentence or a\ncorrected version of it: \"Triggers always execute in the same transaction as\nthe triggering event, and if a trigger fails, the transaction is rolled\nback.\"", "msg_date": "Tue, 27 Apr 2021 14:26:48 +0000", "msg_from": "PG Doc comments form <noreply@postgresql.org>", "msg_from_op": true, "msg_subject": "Clarify how triggers relate to transactions" }, { "msg_contents": "On Tue, 2021-04-27 at 14:26 +0000, PG Doc comments form wrote:\n> The following documentation comment has been logged on the website:\n> \n> Page: https://www.postgresql.org/docs/13/sql-createtrigger.html\n> Description:\n> \n> https://www.postgresql.org/docs/current/sql-createtrigger.html mentions the\n> word \"transaction\" only once, in reference specifically to constraint\n> triggers: \"They can be fired either at the end of the statement causing the\n> triggering event, or at the end of the containing transaction; in the latter\n> case they are said to be deferred.\"\n> \n> If I understand correctly, it would be helpful to add this sentence or a\n> corrected version of it: \"Triggers always execute in the same transaction as\n> the triggering event, and if a trigger fails, the transaction is rolled\n> back.\"\n\nGood idea in principle, but I'd put that information on\nhttps://www.postgresql.org/docs/current/trigger-definition.html\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Wed, 28 Apr 2021 13:24:53 
+0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Clarify how triggers relate to transactions" }, { "msg_contents": "Gotcha. Where would I go to make the PR?\n\nOn Wed, Apr 28, 2021, 7:24 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n\n> On Tue, 2021-04-27 at 14:26 +0000, PG Doc comments form wrote:\n> > The following documentation comment has been logged on the website:\n> >\n> > Page: https://www.postgresql.org/docs/13/sql-createtrigger.html\n> > Description:\n> >\n> > https://www.postgresql.org/docs/current/sql-createtrigger.html mentions\n> the\n> > word \"transaction\" only once, in reference specifically to constraint\n> > triggers: \"They can be fired either at the end of the statement causing\n> the\n> > triggering event, or at the end of the containing transaction; in the\n> latter\n> > case they are said to be deferred.\"\n> >\n> > If I understand correctly, it would be helpful to add this sentence or a\n> > corrected version of it: \"Triggers always execute in the same\n> transaction as\n> > the triggering event, and if a trigger fails, the transaction is rolled\n> > back.\"\n>\n> Good idea in principle, but I'd put that information on\n> https://www.postgresql.org/docs/current/trigger-definition.html\n>\n> Yours,\n> Laurenz Albe\n>\n>\n\n", "msg_date": "Wed, 28 Apr 2021 13:18:40 -0400", "msg_from": "Nathan Long <him@nathanmlong.com>", "msg_from_op": false, "msg_subject": "Re: Clarify how triggers relate to transactions" }, { "msg_contents": "On Wed, 2021-04-28 at 13:18 -0400, Nathan Long wrote:\n> Gotcha. Where would I go to make the PR?\n\nYou'd create a patch against Git master and send it to this\nmailing list or pgsql-hackers. If you don't want it to fall\nbetween the cracks, register in the next commitfest where it\ncan undergo peer review.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Wed, 28 Apr 2021 20:17:49 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Clarify how triggers relate to transactions" }, { "msg_contents": "On Wed, Apr 28, 2021, at 2:18 PM, Nathan Long wrote:\n> Gotcha. 
Where would I go to make the PR?\n> \nThere is no such PR feature; we don't use GitHub despite of having a mirror\nthere. As Laurenz said you should create a patch (using your preferred git\ncommand) and attach to this thread. If you prefer, you can also send the patch\nto pgsql-hackers ML (add the link to this thread). The next step is to register\nyour patch to the next commitfest [1] so we don't lose track of it. For a\ncomplete reference about submitting a patch, take a look at [2].\n\n[1] https://commitfest.postgresql.org/33/\n[2] https://wiki.postgresql.org/wiki/Submitting_a_Patch\n\nRegards,\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n", "msg_date": "Sun, 02 May 2021 12:45:45 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "Re: Clarify how triggers relate to transactions" }, { "msg_contents": "On Wed, 2021-04-28 at 13:24 +0200, Laurenz Albe wrote:\n> On Tue, 2021-04-27 at 14:26 +0000, PG Doc comments form wrote:\n> > https://www.postgresql.org/docs/current/sql-createtrigger.html mentions the\n> > word \"transaction\" only once, in reference specifically to constraint\n> > triggers: \"They can be fired either at the end of the statement causing the\n> > triggering event, or at the end of the containing transaction; in the latter\n> > case they are said to be deferred.\"\n> > \n> > If I understand correctly, it would be helpful to add this sentence or a\n> > corrected version of it: \"Triggers always execute in the same transaction as\n> > the triggering event, and if a trigger fails, the transaction is rolled\n> > back.\"\n> \n> Good idea in principle, but I'd put that information on\n> https://www.postgresql.org/docs/current/trigger-definition.html\n\nHere is a proposed patch for this.\n\nYours,\nLaurenz Albe", "msg_date": "Wed, 05 May 2021 11:55:20 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Clarify how triggers relate to transactions" }, { "msg_contents": "On Wed, 2021-05-05 at 11:55 +0200, Laurenz Albe wrote:\n> On Wed, 2021-04-28 at 13:24 +0200, Laurenz Albe wrote:\n> > On Tue, 2021-04-27 at 14:26 +0000, PG Doc comments form wrote:\n> > > https://www.postgresql.org/docs/current/sql-createtrigger.html mentions the\n> > > word \"transaction\" only once, in reference specifically to constraint\n> > > triggers: \"They can be fired either at the end of the statement
causing the\n> > > triggering event, or at the end of the containing transaction; in the latter\n> > > case they are said to be deferred.\"\n> > > \n> > > If I understand correctly, it would be helpful to add this sentence or a\n> > > corrected version of it: \"Triggers always execute in the same transaction as\n> > > the triggering event, and if a trigger fails, the transaction is rolled\n> > > back.\"\n> > \n> > Good idea in principle, but I'd put that information on\n> > https://www.postgresql.org/docs/current/trigger-definition.html\n> \n> Here is a proposed patch for this.\n\nReplying to -hackers for the commitfest app.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Thu, 20 May 2021 17:53:40 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Clarify how triggers relate to transactions" }, { "msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> On Wed, 2021-04-28 at 13:24 +0200, Laurenz Albe wrote:\n>> On Tue, 2021-04-27 at 14:26 +0000, PG Doc comments form wrote:\n>>> If I understand correctly, it would be helpful to add this sentence or a\n>>> corrected version of it: \"Triggers always execute in the same transaction as\n>>> the triggering event, and if a trigger fails, the transaction is rolled\n>>> back.\"\n\n>> Good idea in principle, but I'd put that information on\n>> https://www.postgresql.org/docs/current/trigger-definition.html\n\n> Here is a proposed patch for this.\n\nI think this is a good idea, but I felt like you'd added the extra\nsentences in not-terribly-well-chosen places. For instance, your\nfirst addition in trigger.sgml is adding to a para that talks about\ntriggers for tables, while the next para talks about triggers for\nviews. So it seems unclear whether the statement is meant to apply\nto view triggers too.\n\nI think it'd work out best to make this a separate para after the\none that defines before/after/instead-of triggers.
How do you\nlike the attached?\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 30 Jul 2021 16:20:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Clarify how triggers relate to transactions" }, { "msg_contents": "On Fri, 2021-07-30 at 16:20 -0400, Tom Lane wrote:\n> Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> > On Wed, 2021-04-28 at 13:24 +0200, Laurenz Albe wrote:\n> > > On Tue, 2021-04-27 at 14:26 +0000, PG Doc comments form wrote:\n> > > > If I understand correctly, it would be helpful to add this sentence or a\n> > > > corrected version of it: \"Triggers always execute in the same transaction as\n> > > > the triggering event, and if a trigger fails, the transaction is rolled\n> > > > back.\"\n> >\n> > Here is a proposed patch for this.\n> \n> I think this is a good idea, but I felt like you'd added the extra\n> sentences in not-terribly-well-chosen places. For instance, your\n> first addition in trigger.sgml is adding to a para that talks about\n> triggers for tables, while the next para talks about triggers for\n> views. So it seems unclear whether the statement is meant to apply\n> to view triggers too.\n> \n> I think it'd work out best to make this a separate para after the\n> one that defines before/after/instead-of triggers. How do you\n> like the attached?\n\nThat is better, and I like your patch. Thanks!\nKeeping paragraphs short is a good thing.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Wed, 18 Aug 2021 13:06:05 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Clarify how triggers relate to transactions" }, { "msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> On Fri, 2021-07-30 at 16:20 -0400, Tom Lane wrote:\n>> I think it'd work out best to make this a separate para after the\n>> one that defines before/after/instead-of triggers. How do you\n>> like the attached?\n\n> That is better, and I like your patch.
Thanks!\n> Keeping paragraphs short is a good thing.\n\nPushed like that, then.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Sep 2021 17:25:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Clarify how triggers relate to transactions" } ]
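The doc text this thread settled on states that a trigger runs inside the transaction of its triggering statement, so a trigger failure aborts the whole transaction. As a minimal runnable sketch of that all-or-nothing behavior (my illustration, not part of the thread): since PostgreSQL needs a live server, it uses Python's built-in sqlite3 as a stand-in, where the RAISE(ROLLBACK, ...) trigger action has the analogous effect; the table and trigger names are invented for the example.

```python
import sqlite3

# Autocommit mode so we control the transaction boundaries explicitly.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")

# Trigger that rejects negative balances. RAISE(ROLLBACK, ...) undoes the
# whole enclosing transaction, mirroring the PostgreSQL behavior described
# in the committed doc text (an error in a trigger aborts the transaction).
conn.execute("""
    CREATE TRIGGER no_negative BEFORE INSERT ON accounts
    WHEN NEW.balance < 0
    BEGIN
        SELECT RAISE(ROLLBACK, 'negative balance');
    END
""")

conn.execute("BEGIN")
conn.execute("INSERT INTO accounts VALUES (1, 100)")      # succeeds
try:
    conn.execute("INSERT INTO accounts VALUES (2, -5)")   # trigger fires and fails
except sqlite3.Error as exc:
    print("trigger failed:", exc)

# The failure rolled back the whole transaction: the valid first insert
# is gone too, not just the row that fired the trigger.
rows = conn.execute("SELECT count(*) FROM accounts").fetchone()[0]
print("rows after failed trigger:", rows)
```

The same session run against PostgreSQL would behave alike, except that the transaction stays open in an aborted state until an explicit ROLLBACK.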
[ { "msg_contents": "\nGreetings.\n\nThe Release Management Team (Pete Geoghegan, Michael Paquier and myself) proposes that the date of the Beta 1 release will be **Thursday May 20, 2021**, which aligns with past practice.\n\n\ncheers\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 27 Apr 2021 16:21:19 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Release 14 Beta 1" } ]
[ { "msg_contents": "Hi,\n\nI spotted an error in the development version documentation for\nlibpq's connection parameter \"target_session_attrs\" (34.1.2 Parameter\nKey Words).\nIn the description for the \"prefer-standby\" mode, it says \"... but if\nnone of the listed hosts is a standby server, try again in all mode\".\nThere is no such \"all\" mode. It should instead say \"any\" mode.\nPatch is attached.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia", "msg_date": "Wed, 28 Apr 2021 12:55:30 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": true, "msg_subject": "Error in libpq docs for target_session_attrs, prefer-standby mode" }, { "msg_contents": "On Wed, 2021-04-28 at 12:55 +1000, Greg Nancarrow wrote:\n> I spotted an error in the development version documentation for\n> libpq's connection parameter \"target_session_attrs\" (34.1.2 Parameter\n> Key Words).\n> In the description for the \"prefer-standby\" mode, it says \"... but if\n> none of the listed hosts is a standby server, try again in all mode\".\n> There is no such \"all\" mode. It should instead say \"any\" mode.\n> Patch is attached.\n\nYou are right, and the patch is good.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Wed, 28 Apr 2021 13:32:05 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Error in libpq docs for target_session_attrs, prefer-standby\n mode" }, { "msg_contents": "On Wed, Apr 28, 2021 at 01:32:05PM +0200, Laurenz Albe wrote:\n> You are right, and the patch is good.\n\nThanks, fixed.\n--\nMichael", "msg_date": "Thu, 29 Apr 2021 11:53:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Error in libpq docs for target_session_attrs, prefer-standby mode" } ]
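The corrected wording above says that with target_session_attrs=prefer-standby, libpq falls back to "any" mode when none of the listed hosts turns out to be a standby. A toy model of that selection order (my sketch for illustration only, not libpq's actual implementation; the function and variable names are invented):

```python
def pick_host(hosts, target_session_attrs="prefer-standby"):
    """hosts: list of (hostname, is_standby) pairs, in connection-string order."""
    if target_session_attrs == "any":
        # "any" mode: the first reachable host wins, standby or not.
        return hosts[0][0] if hosts else None
    if target_session_attrs == "prefer-standby":
        for name, is_standby in hosts:
            if is_standby:
                return name
        # None of the listed hosts is a standby: try again in "any" mode,
        # which is the fallback the corrected documentation describes.
        return pick_host(hosts, "any")
    raise ValueError("mode not covered by this sketch")

print(pick_host([("h1", False), ("h2", True)]))   # first standby in list order
print(pick_host([("h1", False), ("h2", False)]))  # no standby: "any" fallback
```

Real libpq additionally probes each host over the wire (via `in_hot_standby` or a SHOW query) rather than knowing the roles up front; this model only captures the documented ordering and fallback.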